Test Report: QEMU_macOS 18943

a95fbdf9550db8c431fa5a4c330192118acd2cbf:2024-08-31:36027

Failed tests (98/275)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.83
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.89
33 TestAddons/parallel/Registry 71.26
46 TestCertOptions 10.29
47 TestCertExpiration 195.5
48 TestDockerFlags 10.54
49 TestForceSystemdFlag 10.3
50 TestForceSystemdEnv 11.78
95 TestFunctional/parallel/ServiceCmdConnect 29.49
167 TestMultiControlPlane/serial/StopSecondaryNode 214.13
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.39
169 TestMultiControlPlane/serial/RestartSecondaryNode 182.88
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.24
174 TestMultiControlPlane/serial/StopCluster 300.24
175 TestMultiControlPlane/serial/RestartCluster 5.26
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.03
177 TestMultiControlPlane/serial/AddSecondaryNode 0.1
181 TestImageBuild/serial/Setup 10.1
184 TestJSONOutput/start/Command 9.87
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.27
216 TestMountStart/serial/StartWithMountFirst 10.04
220 TestMultiNode/serial/FreshStart2Nodes 10.04
221 TestMultiNode/serial/DeployApp2Nodes 109.1
222 TestMultiNode/serial/PingHostFrom2Pods 0.09
223 TestMultiNode/serial/AddNode 0.07
224 TestMultiNode/serial/MultiNodeLabels 0.06
225 TestMultiNode/serial/ProfileList 0.08
226 TestMultiNode/serial/CopyFile 0.06
227 TestMultiNode/serial/StopNode 0.14
228 TestMultiNode/serial/StartAfterStop 47.84
229 TestMultiNode/serial/RestartKeepsNodes 8.81
230 TestMultiNode/serial/DeleteNode 0.1
231 TestMultiNode/serial/StopMultiNode 3.86
232 TestMultiNode/serial/RestartMultiNode 5.25
233 TestMultiNode/serial/ValidateNameConflict 20.65
237 TestPreload 9.98
239 TestScheduledStopUnix 10.1
240 TestSkaffold 12.18
243 TestRunningBinaryUpgrade 585.91
245 TestKubernetesUpgrade 18.63
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.04
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.38
261 TestStoppedBinaryUpgrade/Upgrade 591.24
263 TestPause/serial/Start 10.11
273 TestNoKubernetes/serial/StartWithK8s 9.88
274 TestNoKubernetes/serial/StartWithStopK8s 5.32
275 TestNoKubernetes/serial/Start 5.31
279 TestNoKubernetes/serial/StartNoArgs 5.35
281 TestNetworkPlugins/group/auto/Start 9.9
282 TestNetworkPlugins/group/kindnet/Start 9.98
283 TestNetworkPlugins/group/calico/Start 9.95
284 TestNetworkPlugins/group/custom-flannel/Start 9.96
285 TestNetworkPlugins/group/false/Start 9.83
286 TestNetworkPlugins/group/enable-default-cni/Start 10.04
287 TestNetworkPlugins/group/flannel/Start 9.79
288 TestNetworkPlugins/group/bridge/Start 9.81
289 TestNetworkPlugins/group/kubenet/Start 9.86
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.84
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 10.03
304 TestStartStop/group/no-preload/serial/DeployApp 0.09
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/no-preload/serial/SecondStart 5.23
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/no-preload/serial/Pause 0.1
314 TestStartStop/group/embed-certs/serial/FirstStart 9.92
315 TestStartStop/group/embed-certs/serial/DeployApp 0.09
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
319 TestStartStop/group/embed-certs/serial/SecondStart 5.24
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/embed-certs/serial/Pause 0.1
325 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
327 TestStartStop/group/newest-cni/serial/FirstStart 10.69
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.14
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.38
337 TestStartStop/group/newest-cni/serial/SecondStart 5.25
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
345 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (12.83s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-445000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-445000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.832044875s)

-- stdout --
	{"specversion":"1.0","id":"6c9ea690-c7a1-4894-9c7f-84057c163d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-445000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"06dac9d9-27cc-4d61-bbde-701eb82f6063","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"a10aa005-1579-4e80-b1b9-7b52b52af9ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig"}}
	{"specversion":"1.0","id":"39a6b25e-1a9f-40a1-b027-5f45af996f6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8433144a-3309-43f6-a9cc-0e2848c57606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b835963d-54c3-4d4e-ac9b-638b5b01cbd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube"}}
	{"specversion":"1.0","id":"ee503fcd-bc92-4e75-a645-cbb6f86de6a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6b982e44-8834-4185-8661-ae555bb017d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a61a313-4dc9-4842-8ff7-c5db5312dce6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a48b7a52-1124-4f07-bfb1-a94cb4fc0f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"67a42f6a-0d7a-4bdb-bf57-d20d8f2bb705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-445000\" primary control-plane node in \"download-only-445000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cbd410d-3fd2-475e-9fd3-cd5030a95c5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c57e6d39-0e32-4c17-9732-c8d9f4e377b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920] Decompressors:map[bz2:0x14000883af0 gz:0x14000883af8 tar:0x14000883aa0 tar.bz2:0x14000883ab0 tar.gz:0x14000883ac0 tar.xz:0x14000883ad0 tar.zst:0x14000883ae0 tbz2:0x14000883ab0 tgz:0x14000883ac0 txz:0x14000883ad0 tzst:0x14000883ae0 xz:0x14000883b00 zip:0x14000883b10 zst:0x14000883b08] Getters:map[file:0x140009d8df0 http:0x140007565f0 https:0x14000756640] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"12da5abc-ba6c-4878-82f9-b03dcd57f5a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0831 15:04:53.212301    1434 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:04:53.212436    1434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:04:53.212440    1434 out.go:358] Setting ErrFile to fd 2...
	I0831 15:04:53.212443    1434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:04:53.212563    1434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	W0831 15:04:53.212642    1434 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18943-972/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18943-972/.minikube/config/config.json: no such file or directory
	I0831 15:04:53.213857    1434 out.go:352] Setting JSON to true
	I0831 15:04:53.231229    1434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":257,"bootTime":1725141636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:04:53.231291    1434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:04:53.235858    1434 out.go:97] [download-only-445000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:04:53.236036    1434 notify.go:220] Checking for updates...
	W0831 15:04:53.236024    1434 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 15:04:53.239726    1434 out.go:169] MINIKUBE_LOCATION=18943
	I0831 15:04:53.242766    1434 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:04:53.247752    1434 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:04:53.250803    1434 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:04:53.253781    1434 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	W0831 15:04:53.259817    1434 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 15:04:53.260038    1434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:04:53.264799    1434 out.go:97] Using the qemu2 driver based on user configuration
	I0831 15:04:53.264819    1434 start.go:297] selected driver: qemu2
	I0831 15:04:53.264823    1434 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:04:53.264919    1434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:04:53.268786    1434 out.go:169] Automatically selected the socket_vmnet network
	I0831 15:04:53.274571    1434 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0831 15:04:53.274647    1434 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 15:04:53.274726    1434 cni.go:84] Creating CNI manager for ""
	I0831 15:04:53.274743    1434 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 15:04:53.274784    1434 start.go:340] cluster config:
	{Name:download-only-445000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-445000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:04:53.279852    1434 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:04:53.284805    1434 out.go:97] Downloading VM boot image ...
	I0831 15:04:53.284823    1434 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso
	I0831 15:04:58.394705    1434 out.go:97] Starting "download-only-445000" primary control-plane node in "download-only-445000" cluster
	I0831 15:04:58.394723    1434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:04:58.456694    1434 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 15:04:58.456730    1434 cache.go:56] Caching tarball of preloaded images
	I0831 15:04:58.456908    1434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:04:58.460948    1434 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 15:04:58.460955    1434 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:04:58.539868    1434 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 15:05:04.741593    1434 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:05:04.741766    1434 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:05:05.436531    1434 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 15:05:05.436712    1434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/download-only-445000/config.json ...
	I0831 15:05:05.436728    1434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/download-only-445000/config.json: {Name:mk7632ce36e65a62ce12d632696257046a80f598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:05.436953    1434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:05:05.437143    1434 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0831 15:05:05.966382    1434 out.go:193] 
	W0831 15:05:05.974431    1434 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920] Decompressors:map[bz2:0x14000883af0 gz:0x14000883af8 tar:0x14000883aa0 tar.bz2:0x14000883ab0 tar.gz:0x14000883ac0 tar.xz:0x14000883ad0 tar.zst:0x14000883ae0 tbz2:0x14000883ab0 tgz:0x14000883ac0 txz:0x14000883ad0 tzst:0x14000883ae0 xz:0x14000883b00 zip:0x14000883b10 zst:0x14000883b08] Getters:map[file:0x140009d8df0 http:0x140007565f0 https:0x14000756640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0831 15:05:05.974454    1434 out_reason.go:110] 
	W0831 15:05:05.983133    1434 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:05:05.987263    1434 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-445000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.83s)
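
Note: this failure is not flakiness. The error above is a 404 on the kubectl checksum URL: dl.k8s.io publishes no darwin/arm64 kubectl artifacts for v1.20.0, so the fetch fails on every Apple Silicon host. A quick sketch to confirm from any machine (curl prints only the final HTTP status after redirects; the v1.31.0 URL is an assumed comparison point, that being the Kubernetes version used elsewhere in this run):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404 per this report
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256   # expect 200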

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
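
Note: this is a direct downstream of the json-events failure above. The download never completed, so the cached binary the test stats was never written. Replaying the check is a one-liner against the path from the message above (expect "no such file or directory", matching the test error):

	stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl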

TestOffline (9.89s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-236000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-236000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.731530125s)

-- stdout --
	* [offline-docker-236000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-236000" primary control-plane node in "offline-docker-236000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-236000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:50:35.285666    4022 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:50:35.285798    4022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:35.285801    4022 out.go:358] Setting ErrFile to fd 2...
	I0831 15:50:35.285804    4022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:35.285938    4022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:50:35.287159    4022 out.go:352] Setting JSON to false
	I0831 15:50:35.305178    4022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2999,"bootTime":1725141636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:50:35.305251    4022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:50:35.310154    4022 out.go:177] * [offline-docker-236000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:50:35.318072    4022 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:50:35.318074    4022 notify.go:220] Checking for updates...
	I0831 15:50:35.324091    4022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:50:35.327057    4022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:50:35.330049    4022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:50:35.333041    4022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:50:35.336030    4022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:50:35.339433    4022 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:35.339491    4022 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:50:35.342992    4022 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:50:35.350088    4022 start.go:297] selected driver: qemu2
	I0831 15:50:35.350098    4022 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:50:35.350105    4022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:50:35.351991    4022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:50:35.355023    4022 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:50:35.358117    4022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:50:35.358140    4022 cni.go:84] Creating CNI manager for ""
	I0831 15:50:35.358147    4022 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:50:35.358155    4022 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:50:35.358208    4022 start.go:340] cluster config:
	{Name:offline-docker-236000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:50:35.361823    4022 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:35.369026    4022 out.go:177] * Starting "offline-docker-236000" primary control-plane node in "offline-docker-236000" cluster
	I0831 15:50:35.373066    4022 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:50:35.373089    4022 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:50:35.373099    4022 cache.go:56] Caching tarball of preloaded images
	I0831 15:50:35.373177    4022 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:50:35.373184    4022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:50:35.373242    4022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/offline-docker-236000/config.json ...
	I0831 15:50:35.373253    4022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/offline-docker-236000/config.json: {Name:mk22e081a347a4d7d35bea8805d97007da9d144e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:50:35.373471    4022 start.go:360] acquireMachinesLock for offline-docker-236000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:35.373507    4022 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "offline-docker-236000"
	I0831 15:50:35.373523    4022 start.go:93] Provisioning new machine with config: &{Name:offline-docker-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:35.373563    4022 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:35.382037    4022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:35.397806    4022 start.go:159] libmachine.API.Create for "offline-docker-236000" (driver="qemu2")
	I0831 15:50:35.397836    4022 client.go:168] LocalClient.Create starting
	I0831 15:50:35.397916    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:35.397947    4022 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:35.397958    4022 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:35.398000    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:35.398024    4022 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:35.398032    4022 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:35.398391    4022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:35.557293    4022 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:35.599204    4022 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:35.599215    4022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:35.599420    4022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2
	I0831 15:50:35.609471    4022 main.go:141] libmachine: STDOUT: 
	I0831 15:50:35.609500    4022 main.go:141] libmachine: STDERR: 
	I0831 15:50:35.609566    4022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2 +20000M
	I0831 15:50:35.618747    4022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:35.618773    4022 main.go:141] libmachine: STDERR: 
	I0831 15:50:35.618793    4022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2
	I0831 15:50:35.618807    4022 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:35.618820    4022 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:35.618852    4022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bd:0b:d3:23:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2
	I0831 15:50:35.620733    4022 main.go:141] libmachine: STDOUT: 
	I0831 15:50:35.620752    4022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:35.620768    4022 client.go:171] duration metric: took 222.933584ms to LocalClient.Create
	I0831 15:50:37.622473    4022 start.go:128] duration metric: took 2.248960041s to createHost
	I0831 15:50:37.622484    4022 start.go:83] releasing machines lock for "offline-docker-236000", held for 2.249027417s
	W0831 15:50:37.622495    4022 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:37.629562    4022 out.go:177] * Deleting "offline-docker-236000" in qemu2 ...
	W0831 15:50:37.639656    4022 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:37.639667    4022 start.go:729] Will try again in 5 seconds ...
	I0831 15:50:42.641645    4022 start.go:360] acquireMachinesLock for offline-docker-236000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:42.641776    4022 start.go:364] duration metric: took 95.833µs to acquireMachinesLock for "offline-docker-236000"
	I0831 15:50:42.641809    4022 start.go:93] Provisioning new machine with config: &{Name:offline-docker-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:42.641853    4022 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:42.653004    4022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:42.668947    4022 start.go:159] libmachine.API.Create for "offline-docker-236000" (driver="qemu2")
	I0831 15:50:42.668978    4022 client.go:168] LocalClient.Create starting
	I0831 15:50:42.669043    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:42.669081    4022 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:42.669092    4022 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:42.669146    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:42.669168    4022 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:42.669183    4022 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:42.669474    4022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:42.835401    4022 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:42.924501    4022 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:42.924519    4022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:42.924741    4022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2
	I0831 15:50:42.934813    4022 main.go:141] libmachine: STDOUT: 
	I0831 15:50:42.934846    4022 main.go:141] libmachine: STDERR: 
	I0831 15:50:42.934925    4022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2 +20000M
	I0831 15:50:42.944715    4022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:42.944747    4022 main.go:141] libmachine: STDERR: 
	I0831 15:50:42.944767    4022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2
	I0831 15:50:42.944772    4022 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:42.944785    4022 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:42.944829    4022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:70:60:c9:c8:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/offline-docker-236000/disk.qcow2
	I0831 15:50:42.946831    4022 main.go:141] libmachine: STDOUT: 
	I0831 15:50:42.946853    4022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:42.946865    4022 client.go:171] duration metric: took 277.890209ms to LocalClient.Create
	I0831 15:50:44.949098    4022 start.go:128] duration metric: took 2.307253583s to createHost
	I0831 15:50:44.949184    4022 start.go:83] releasing machines lock for "offline-docker-236000", held for 2.30745675s
	W0831 15:50:44.949565    4022 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:44.959116    4022 out.go:201] 
	W0831 15:50:44.963194    4022 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:50:44.963243    4022 out.go:270] * 
	* 
	W0831 15:50:44.966106    4022 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:50:44.974078    4022 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-236000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-31 15:50:44.989436 -0700 PDT m=+2751.931066876
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-236000 -n offline-docker-236000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-236000 -n offline-docker-236000: exit status 7 (68.863667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "offline-docker-236000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "offline-docker-236000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-236000
--- FAIL: TestOffline (9.89s)
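
Note: every qemu2 VM start in this run dies the same way. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not listening on the CI host. A minimal triage sketch, assuming socket_vmnet is installed at the paths shown in the log (nc -U is the BSD netcat unix-socket probe):

	pgrep -fl socket_vmnet            # is the daemon process alive?
	ls -l /var/run/socket_vmnet       # does its socket exist?
	nc -U /var/run/socket_vmnet </dev/null && echo "socket accepts connections"

If the daemon is down, restarting it (for example via its launchd service, where it is installed that way) should clear the long tail of "Failed to connect to /var/run/socket_vmnet" failures in this report.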

TestAddons/parallel/Registry (71.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.334125ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-dbh5s" [f256bb8d-d37d-4371-a5a9-38e94d41cce2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011021667s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-7mcwz" [bb84f056-88e3-4c3d-845b-eb2704863b95] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004728125s
addons_test.go:342: (dbg) Run:  kubectl --context addons-492000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-492000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-492000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.053374542s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-492000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
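
Note: both registry pods went healthy within ~5s; only the in-cluster HTTP probe timed out, which points at service DNS or proxying rather than the registry itself. The probe can be replayed by hand with the command the test used (the pod name registry-probe below is an arbitrary substitute for the test's registry-test):

	kubectl --context addons-492000 run --rm registry-probe --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A healthy addon answers HTTP/1.1 200, which is what the assertion at addons_test.go:353 expects.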
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 ip
2024/08/31 15:17:22 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-492000 -n addons-492000
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 logs -n 25
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-445000 | jenkins | v1.33.1 | 31 Aug 24 15:04 PDT |                     |
	|         | -p download-only-445000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| delete  | -p download-only-445000              | download-only-445000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| start   | -o=json --download-only              | download-only-326000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT |                     |
	|         | -p download-only-326000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| delete  | -p download-only-326000              | download-only-326000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| delete  | -p download-only-445000              | download-only-445000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| delete  | -p download-only-326000              | download-only-326000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| start   | --download-only -p                   | binary-mirror-803000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT |                     |
	|         | binary-mirror-803000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49311               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-803000              | binary-mirror-803000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| addons  | disable dashboard -p                 | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT |                     |
	|         | addons-492000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT |                     |
	|         | addons-492000                        |                      |         |         |                     |                     |
	| start   | -p addons-492000 --wait=true         | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:07 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-492000 addons disable         | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:08 PDT | 31 Aug 24 15:08 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-492000 addons                 | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:16 PDT | 31 Aug 24 15:16 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-492000 addons                 | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:16 PDT | 31 Aug 24 15:16 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-492000 addons                 | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:17 PDT | 31 Aug 24 15:17 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:17 PDT | 31 Aug 24 15:17 PDT |
	|         | addons-492000                        |                      |         |         |                     |                     |
	| ip      | addons-492000 ip                     | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:17 PDT | 31 Aug 24 15:17 PDT |
	| addons  | addons-492000 addons disable         | addons-492000        | jenkins | v1.33.1 | 31 Aug 24 15:17 PDT | 31 Aug 24 15:17 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 15:05:14
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 15:05:14.363464    1510 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:05:14.363654    1510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:05:14.363658    1510 out.go:358] Setting ErrFile to fd 2...
	I0831 15:05:14.363663    1510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:05:14.363772    1510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:05:14.364877    1510 out.go:352] Setting JSON to false
	I0831 15:05:14.380794    1510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":278,"bootTime":1725141636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:05:14.380873    1510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:05:14.385935    1510 out.go:177] * [addons-492000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:05:14.392976    1510 notify.go:220] Checking for updates...
	I0831 15:05:14.396926    1510 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:05:14.399907    1510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:05:14.402901    1510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:05:14.405929    1510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:05:14.408922    1510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:05:14.411918    1510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:05:14.413444    1510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:05:14.417863    1510 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:05:14.424743    1510 start.go:297] selected driver: qemu2
	I0831 15:05:14.424749    1510 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:05:14.424755    1510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:05:14.426846    1510 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:05:14.430926    1510 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:05:14.434024    1510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:05:14.434047    1510 cni.go:84] Creating CNI manager for ""
	I0831 15:05:14.434055    1510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:05:14.434059    1510 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:05:14.434093    1510 start.go:340] cluster config:
	{Name:addons-492000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:05:14.437817    1510 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:05:14.446888    1510 out.go:177] * Starting "addons-492000" primary control-plane node in "addons-492000" cluster
	I0831 15:05:14.450907    1510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:05:14.450924    1510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:05:14.450934    1510 cache.go:56] Caching tarball of preloaded images
	I0831 15:05:14.450984    1510 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:05:14.450990    1510 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:05:14.451183    1510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/config.json ...
	I0831 15:05:14.451194    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/config.json: {Name:mk15bfa00bce97b02dc86ad189826a20d2482deb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:14.451645    1510 start.go:360] acquireMachinesLock for addons-492000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:05:14.451712    1510 start.go:364] duration metric: took 61.666µs to acquireMachinesLock for "addons-492000"
	I0831 15:05:14.451722    1510 start.go:93] Provisioning new machine with config: &{Name:addons-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:05:14.451753    1510 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:05:14.459909    1510 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0831 15:05:14.680628    1510 start.go:159] libmachine.API.Create for "addons-492000" (driver="qemu2")
	I0831 15:05:14.680681    1510 client.go:168] LocalClient.Create starting
	I0831 15:05:14.680852    1510 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:05:14.785312    1510 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:05:14.830932    1510 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:05:15.095009    1510 main.go:141] libmachine: Creating SSH key...
	I0831 15:05:15.234957    1510 main.go:141] libmachine: Creating Disk image...
	I0831 15:05:15.234963    1510 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:05:15.235235    1510 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2
	I0831 15:05:15.254136    1510 main.go:141] libmachine: STDOUT: 
	I0831 15:05:15.254165    1510 main.go:141] libmachine: STDERR: 
	I0831 15:05:15.254211    1510 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2 +20000M
	I0831 15:05:15.262157    1510 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:05:15.262172    1510 main.go:141] libmachine: STDERR: 
	I0831 15:05:15.262185    1510 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2
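Both qemu-img steps above completed with an empty STDERR; when one does not, the resulting image can be inspected in place (path taken from the log, qemu-img assumed on PATH):

    qemu-img info /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2
    # expect "file format: qcow2" and a virtual size reflecting the +20000M resize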
	I0831 15:05:15.262189    1510 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:05:15.262228    1510 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:05:15.262259    1510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:54:99:61:a2:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/disk.qcow2
	I0831 15:05:15.318702    1510 main.go:141] libmachine: STDOUT: 
	I0831 15:05:15.318732    1510 main.go:141] libmachine: STDERR: 
	I0831 15:05:15.318735    1510 main.go:141] libmachine: Attempt 0
	I0831 15:05:15.318749    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:15.318803    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:15.318819    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:17.319943    1510 main.go:141] libmachine: Attempt 1
	I0831 15:05:17.320021    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:17.320478    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:17.320530    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:19.321711    1510 main.go:141] libmachine: Attempt 2
	I0831 15:05:19.321791    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:19.322154    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:19.322206    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:21.323356    1510 main.go:141] libmachine: Attempt 3
	I0831 15:05:21.323398    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:21.323498    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:21.323519    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:23.324524    1510 main.go:141] libmachine: Attempt 4
	I0831 15:05:23.324535    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:23.324565    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:23.324573    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:25.325583    1510 main.go:141] libmachine: Attempt 5
	I0831 15:05:25.325589    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:25.325618    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:25.325625    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:27.326632    1510 main.go:141] libmachine: Attempt 6
	I0831 15:05:27.326645    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:27.326709    1510 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0831 15:05:27.326717    1510 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d4e411}
	I0831 15:05:29.327836    1510 main.go:141] libmachine: Attempt 7
	I0831 15:05:29.327925    1510 main.go:141] libmachine: Searching for 3a:54:99:61:a2:61 in /var/db/dhcpd_leases ...
	I0831 15:05:29.328388    1510 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0831 15:05:29.328443    1510 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3a:54:99:61:a2:61 ID:1,3a:54:99:61:a2:61 Lease:0x66d4e527}
	I0831 15:05:29.328457    1510 main.go:141] libmachine: Found match: 3a:54:99:61:a2:61
	I0831 15:05:29.328488    1510 main.go:141] libmachine: IP: 192.168.105.2
	I0831 15:05:29.328511    1510 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
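The attempt loop above polls macOS's vmnet DHCP lease database until the VM's MAC shows up. The same lookup can be done by hand; the exact lease-record layout is assumed from the parsed entries logged above, so the context widths here are a guess:

    grep -B3 -A2 '3a:54:99:61:a2:61' /var/db/dhcpd_leases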
	I0831 15:05:32.348410    1510 machine.go:93] provisionDockerMachine start ...
	I0831 15:05:32.349898    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:32.350406    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:32.350421    1510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 15:05:32.424756    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0831 15:05:32.424790    1510 buildroot.go:166] provisioning hostname "addons-492000"
	I0831 15:05:32.424899    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:32.425186    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:32.425198    1510 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-492000 && echo "addons-492000" | sudo tee /etc/hostname
	I0831 15:05:32.495362    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-492000
	
	I0831 15:05:32.495447    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:32.495609    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:32.495620    1510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-492000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-492000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-492000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 15:05:32.554795    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:05:32.554811    1510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18943-972/.minikube CaCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18943-972/.minikube}
	I0831 15:05:32.554830    1510 buildroot.go:174] setting up certificates
	I0831 15:05:32.554835    1510 provision.go:84] configureAuth start
	I0831 15:05:32.554838    1510 provision.go:143] copyHostCerts
	I0831 15:05:32.554950    1510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem (1679 bytes)
	I0831 15:05:32.555185    1510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem (1082 bytes)
	I0831 15:05:32.555303    1510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem (1123 bytes)
	I0831 15:05:32.555400    1510 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem org=jenkins.addons-492000 san=[127.0.0.1 192.168.105.2 addons-492000 localhost minikube]
	I0831 15:05:32.632950    1510 provision.go:177] copyRemoteCerts
	I0831 15:05:32.633018    1510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 15:05:32.633026    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:32.660099    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 15:05:32.668116    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 15:05:32.675964    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 15:05:32.683937    1510 provision.go:87] duration metric: took 129.100459ms to configureAuth
	I0831 15:05:32.683949    1510 buildroot.go:189] setting minikube options for container-runtime
	I0831 15:05:32.684080    1510 config.go:182] Loaded profile config "addons-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:05:32.684131    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:32.684226    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:32.684231    1510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 15:05:32.734699    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0831 15:05:32.734708    1510 buildroot.go:70] root file system type: tmpfs
	I0831 15:05:32.734759    1510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 15:05:32.734803    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:32.734913    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:32.734948    1510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 15:05:32.787309    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0831 15:05:32.787352    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:32.787457    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:32.787466    1510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 15:05:34.159064    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
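The command above installed the new unit and restarted docker; whether the ExecStart= override actually took effect can be confirmed from inside the guest with standard systemd queries:

    sudo systemctl cat docker.service   # the unit file systemd actually loaded
    systemctl show docker -p ExecStart  # the effective command (the last ExecStart= wins)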
	
	I0831 15:05:34.159076    1510 machine.go:96] duration metric: took 1.810682209s to provisionDockerMachine
	I0831 15:05:34.159083    1510 client.go:171] duration metric: took 19.478855792s to LocalClient.Create
	I0831 15:05:34.159092    1510 start.go:167] duration metric: took 19.478930166s to libmachine.API.Create "addons-492000"
	I0831 15:05:34.159099    1510 start.go:293] postStartSetup for "addons-492000" (driver="qemu2")
	I0831 15:05:34.159105    1510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 15:05:34.159168    1510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 15:05:34.159177    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:34.187052    1510 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 15:05:34.189216    1510 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 15:05:34.189226    1510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/addons for local assets ...
	I0831 15:05:34.189325    1510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/files for local assets ...
	I0831 15:05:34.189358    1510 start.go:296] duration metric: took 30.257125ms for postStartSetup
	I0831 15:05:34.189732    1510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/config.json ...
	I0831 15:05:34.189912    1510 start.go:128] duration metric: took 19.738620167s to createHost
	I0831 15:05:34.189935    1510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:05:34.190021    1510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100dac5a0] 0x100daee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0831 15:05:34.190026    1510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 15:05:34.238237    1510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725141934.004676961
	
	I0831 15:05:34.238245    1510 fix.go:216] guest clock: 1725141934.004676961
	I0831 15:05:34.238250    1510 fix.go:229] Guest: 2024-08-31 15:05:34.004676961 -0700 PDT Remote: 2024-08-31 15:05:34.189915 -0700 PDT m=+19.844429001 (delta=-185.238039ms)
	I0831 15:05:34.238259    1510 fix.go:200] guest clock delta is within tolerance: -185.238039ms
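The delta is simply the guest timestamp minus the host timestamp captured above; worked out from the two logged values:

    echo '1725141934.004676961 - 1725141934.189915' | bc
    # -.185238039, i.e. the -185.238039ms delta in the fix.go lines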
	I0831 15:05:34.238262    1510 start.go:83] releasing machines lock for "addons-492000", held for 19.787010667s
	I0831 15:05:34.238533    1510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 15:05:34.238576    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:34.238533    1510 ssh_runner.go:195] Run: cat /version.json
	I0831 15:05:34.238592    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:34.310080    1510 ssh_runner.go:195] Run: systemctl --version
	I0831 15:05:34.312391    1510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 15:05:34.314472    1510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 15:05:34.314496    1510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 15:05:34.320649    1510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 15:05:34.320656    1510 start.go:495] detecting cgroup driver to use...
	I0831 15:05:34.320753    1510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:05:34.326851    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 15:05:34.330541    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 15:05:34.334277    1510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 15:05:34.334306    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 15:05:34.338046    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:05:34.342005    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 15:05:34.345889    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:05:34.349672    1510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 15:05:34.353653    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 15:05:34.357448    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 15:05:34.361252    1510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 15:05:34.365351    1510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 15:05:34.368582    1510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 15:05:34.371785    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:34.443624    1510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 15:05:34.453979    1510 start.go:495] detecting cgroup driver to use...
	I0831 15:05:34.454055    1510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 15:05:34.460305    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:05:34.465857    1510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 15:05:34.473127    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:05:34.479071    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:05:34.484435    1510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0831 15:05:34.509198    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:05:34.514779    1510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:05:34.521023    1510 ssh_runner.go:195] Run: which cri-dockerd
	I0831 15:05:34.522342    1510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 15:05:34.525890    1510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0831 15:05:34.531568    1510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 15:05:34.591232    1510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 15:05:34.654884    1510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 15:05:34.654946    1510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 15:05:34.661314    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:34.725970    1510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:05:36.907020    1510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.181085291s)
	I0831 15:05:36.907080    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 15:05:36.912743    1510 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0831 15:05:36.919463    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:05:36.925024    1510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 15:05:36.990560    1510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 15:05:37.062150    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:37.120165    1510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 15:05:37.126609    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:05:37.132043    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:37.190246    1510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 15:05:37.215257    1510 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 15:05:37.215347    1510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 15:05:37.218059    1510 start.go:563] Will wait 60s for crictl version
	I0831 15:05:37.218107    1510 ssh_runner.go:195] Run: which crictl
	I0831 15:05:37.219646    1510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 15:05:37.234653    1510 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0831 15:05:37.234720    1510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:05:37.244183    1510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:05:37.264761    1510 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0831 15:05:37.264847    1510 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0831 15:05:37.266364    1510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 15:05:37.270688    1510 kubeadm.go:883] updating cluster {Name:addons-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 15:05:37.270733    1510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:05:37.270774    1510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:05:37.280058    1510 docker.go:685] Got preloaded images: 
	I0831 15:05:37.280067    1510 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0831 15:05:37.280107    1510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:05:37.283429    1510 ssh_runner.go:195] Run: which lz4
	I0831 15:05:37.284861    1510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 15:05:37.286155    1510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 15:05:37.286165    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322549298 bytes)
	I0831 15:05:38.562367    1510 docker.go:649] duration metric: took 1.277557s to copy over tarball
	I0831 15:05:38.562428    1510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 15:05:39.536959    1510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 15:05:39.552050    1510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:05:39.555483    1510 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0831 15:05:39.561199    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:39.627120    1510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:05:41.821049    1510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.193962666s)
	I0831 15:05:41.821157    1510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:05:41.836826    1510 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:05:41.836836    1510 cache_images.go:84] Images are preloaded, skipping loading
	I0831 15:05:41.836841    1510 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.0 docker true true} ...
	I0831 15:05:41.836923    1510 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-492000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 15:05:41.837006    1510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 15:05:41.858140    1510 cni.go:84] Creating CNI manager for ""
	I0831 15:05:41.858151    1510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:05:41.858156    1510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 15:05:41.858167    1510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-492000 NodeName:addons-492000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 15:05:41.858227    1510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-492000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
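A config like the one above can be validated without mutating the node, assuming the kubeadm binary under /var/lib/minikube/binaries/v1.31.0 (the config path below matches the scp target a few lines down, minus its .new suffix):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run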
	
	I0831 15:05:41.858285    1510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 15:05:41.862329    1510 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 15:05:41.862362    1510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 15:05:41.866050    1510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0831 15:05:41.872138    1510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 15:05:41.878000    1510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0831 15:05:41.884191    1510 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0831 15:05:41.885601    1510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 15:05:41.889920    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:41.973694    1510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:05:41.980511    1510 certs.go:68] Setting up /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000 for IP: 192.168.105.2
	I0831 15:05:41.980518    1510 certs.go:194] generating shared ca certs ...
	I0831 15:05:41.980528    1510 certs.go:226] acquiring lock for ca certs: {Name:mk9afb7f1a72cfb2bc15164a899be0d76a69591d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:41.980724    1510 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key
	I0831 15:05:42.045261    1510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt ...
	I0831 15:05:42.045272    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt: {Name:mkcd599d95ec6d55602355f7ac4ce0c25f289279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.045578    1510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key ...
	I0831 15:05:42.045582    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key: {Name:mkcf030a6b3d0104cd94aa9e9d38b38d59319495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.045698    1510 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key
	I0831 15:05:42.117331    1510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt ...
	I0831 15:05:42.117334    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt: {Name:mkbc4ca390a5c24602cddfc225bb05156290ef83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.117473    1510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key ...
	I0831 15:05:42.117476    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key: {Name:mkf1622f5d926dd9cf6f9b3767b764bdd13a7c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
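The certs.go:240 steps above generate the two shared, self-signed CAs ("minikubeCA" and "proxyClientCA") that every later certificate chains to. A stripped-down sketch of that kind of CA generation with crypto/x509; the 10-year lifetime and 2048-bit key here are illustrative, not minikube's exact parameters:

	// Sketch: create a self-signed CA certificate like "minikubeCA"
	// (minikube's certs.go adds more fields plus file locking).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template is both subject and issuer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		out, err := os.Create("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()
		if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}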
	I0831 15:05:42.117595    1510 certs.go:256] generating profile certs ...
	I0831 15:05:42.117626    1510 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.key
	I0831 15:05:42.117634    1510 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt with IP's: []
	I0831 15:05:42.277297    1510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt ...
	I0831 15:05:42.277302    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: {Name:mk0008bccaf87ff06e3ab77abe48f09479624c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.277482    1510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.key ...
	I0831 15:05:42.277485    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.key: {Name:mk6cf95a320299e1ab431a41d5b2c2ab707a6224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.277616    1510 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.key.d8ad3cf3
	I0831 15:05:42.277627    1510 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.crt.d8ad3cf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0831 15:05:42.319112    1510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.crt.d8ad3cf3 ...
	I0831 15:05:42.319119    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.crt.d8ad3cf3: {Name:mkad24e791ad50fba55d1830321a283ade4dc10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.319266    1510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.key.d8ad3cf3 ...
	I0831 15:05:42.319270    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.key.d8ad3cf3: {Name:mk9851a19931e8f726a9ce696d4fcb16378943b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.319389    1510 certs.go:381] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.crt.d8ad3cf3 -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.crt
	I0831 15:05:42.319520    1510 certs.go:385] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.key.d8ad3cf3 -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.key
	I0831 15:05:42.319613    1510 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.key
	I0831 15:05:42.319623    1510 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.crt with IP's: []
	I0831 15:05:42.368986    1510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.crt ...
	I0831 15:05:42.368992    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.crt: {Name:mkcf267b34fe51404a5719ed87ce0c929fda9d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.369179    1510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.key ...
	I0831 15:05:42.369183    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.key: {Name:mkaa78be695ec7ecadfc51fa2d99d911b3745adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:42.369490    1510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 15:05:42.369518    1510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem (1082 bytes)
	I0831 15:05:42.369542    1510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem (1123 bytes)
	I0831 15:05:42.369565    1510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem (1679 bytes)
	I0831 15:05:42.371197    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 15:05:42.379837    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 15:05:42.387709    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 15:05:42.395388    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0831 15:05:42.403475    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 15:05:42.411354    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 15:05:42.419117    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 15:05:42.427052    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 15:05:42.435092    1510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 15:05:42.443028    1510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 15:05:42.450053    1510 ssh_runner.go:195] Run: openssl version
	I0831 15:05:42.452369    1510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 15:05:42.455786    1510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:05:42.457170    1510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:05 /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:05:42.457188    1510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:05:42.459131    1510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
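openssl x509 -hash -noout prints the certificate's subject-name hash (b5213941 for minikubeCA here), and OpenSSL resolves trust lookups in /etc/ssl/certs through <hash>.0 symlinks, which is what the ln -fs above creates. A hypothetical sketch of the same two steps from Go:

	// Sketch: compute the subject hash with openssl and create the
	// <hash>.0 trust-store symlink, as c_rehash would.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// os.Symlink fails if the link already exists; the shell uses ln -fs.
		if err := os.Symlink(pemPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link)
	}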
	I0831 15:05:42.462843    1510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 15:05:42.464231    1510 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
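The stat failure above is the expected first-start case: the runner treats a non-zero exit status as "certificate not yet present" rather than as an error. A small sketch of that exit-code-as-answer pattern:

	// Sketch: treat a non-zero exit status from stat(1) as "file absent"
	// instead of a hard failure, as the log line above does.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func exists(path string) (bool, error) {
		err := exec.Command("stat", path).Run()
		if err == nil {
			return true, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return false, nil // exited non-zero: no such file
		}
		return false, err // stat itself could not run
	}

	func main() {
		ok, err := exists("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("exists:", ok)
	}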
	I0831 15:05:42.464276    1510 kubeadm.go:392] StartCluster: {Name:addons-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:05:42.464341    1510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:05:42.469921    1510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 15:05:42.473795    1510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:05:42.477624    1510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:05:42.481202    1510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 15:05:42.481208    1510 kubeadm.go:157] found existing configuration files:
	
	I0831 15:05:42.481231    1510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 15:05:42.484418    1510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 15:05:42.484443    1510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 15:05:42.487481    1510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 15:05:42.490528    1510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 15:05:42.490552    1510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 15:05:42.494131    1510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 15:05:42.497658    1510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 15:05:42.497683    1510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:05:42.501396    1510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 15:05:42.504735    1510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 15:05:42.504754    1510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
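The kubeadm.go:163 lines above repeat one check per kubeconfig: grep each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf for the expected https://control-plane.minikube.internal:8443 endpoint, and rm the file when the check fails (here all four simply do not exist yet). A hypothetical condensed version of that cleanup loop:

	// Sketch of the stale-config cleanup above: remove any kubeconfig
	// that does not reference the expected control-plane endpoint.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or stale: delete, ignoring "already gone".
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					log.Fatal(rmErr)
				}
			}
		}
	}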
	I0831 15:05:42.508145    1510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 15:05:42.530499    1510 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 15:05:42.530597    1510 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 15:05:42.572798    1510 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 15:05:42.572847    1510 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 15:05:42.572910    1510 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 15:05:42.583564    1510 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 15:05:42.598731    1510 out.go:235]   - Generating certificates and keys ...
	I0831 15:05:42.598762    1510 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 15:05:42.598795    1510 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 15:05:42.721667    1510 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 15:05:42.770709    1510 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 15:05:42.922373    1510 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 15:05:43.043185    1510 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 15:05:43.110604    1510 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 15:05:43.110668    1510 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-492000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0831 15:05:43.191956    1510 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 15:05:43.192020    1510 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-492000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0831 15:05:43.335927    1510 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 15:05:43.459362    1510 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 15:05:43.544048    1510 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 15:05:43.544079    1510 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 15:05:43.678403    1510 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 15:05:43.765961    1510 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 15:05:43.977620    1510 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 15:05:44.140483    1510 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 15:05:44.404469    1510 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 15:05:44.404975    1510 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 15:05:44.406684    1510 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 15:05:44.411168    1510 out.go:235]   - Booting up control plane ...
	I0831 15:05:44.411225    1510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 15:05:44.411258    1510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 15:05:44.411286    1510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 15:05:44.417683    1510 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 15:05:44.420392    1510 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 15:05:44.420414    1510 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 15:05:44.501004    1510 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 15:05:44.501101    1510 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 15:05:45.003402    1510 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.77175ms
	I0831 15:05:45.003633    1510 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 15:05:48.005252    1510 kubeadm.go:310] [api-check] The API server is healthy after 3.002387668s
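Both waits above are plain HTTP probes: the kubelet's healthz endpoint at http://127.0.0.1:10248/healthz answered within ~502ms and the API server within ~3s, each against kubeadm's stated 4m0s cap. A sketch of such a poll loop:

	// Sketch: poll an HTTP healthz endpoint until it answers 200 OK
	// or the deadline passes, as the two kubeadm checks above do.
	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log; 4m0s matches kubeadm's stated cap.
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			log.Fatal(err)
		}
	}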
	I0831 15:05:48.010862    1510 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 15:05:48.016585    1510 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 15:05:48.023996    1510 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 15:05:48.024087    1510 kubeadm.go:310] [mark-control-plane] Marking the node addons-492000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 15:05:48.027326    1510 kubeadm.go:310] [bootstrap-token] Using token: d16fxo.9wsqs4xe55vfs911
	I0831 15:05:48.030820    1510 out.go:235]   - Configuring RBAC rules ...
	I0831 15:05:48.030877    1510 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 15:05:48.031677    1510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 15:05:48.038328    1510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 15:05:48.039360    1510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 15:05:48.040389    1510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 15:05:48.042327    1510 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 15:05:48.421936    1510 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 15:05:48.820275    1510 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 15:05:49.416183    1510 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 15:05:49.417064    1510 kubeadm.go:310] 
	I0831 15:05:49.417163    1510 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 15:05:49.417173    1510 kubeadm.go:310] 
	I0831 15:05:49.417303    1510 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 15:05:49.417316    1510 kubeadm.go:310] 
	I0831 15:05:49.417349    1510 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 15:05:49.417931    1510 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 15:05:49.418064    1510 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 15:05:49.418081    1510 kubeadm.go:310] 
	I0831 15:05:49.418167    1510 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 15:05:49.418178    1510 kubeadm.go:310] 
	I0831 15:05:49.418249    1510 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 15:05:49.418255    1510 kubeadm.go:310] 
	I0831 15:05:49.418332    1510 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 15:05:49.418462    1510 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 15:05:49.418579    1510 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 15:05:49.418593    1510 kubeadm.go:310] 
	I0831 15:05:49.418737    1510 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 15:05:49.418885    1510 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 15:05:49.418901    1510 kubeadm.go:310] 
	I0831 15:05:49.419056    1510 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d16fxo.9wsqs4xe55vfs911 \
	I0831 15:05:49.419237    1510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 \
	I0831 15:05:49.419276    1510 kubeadm.go:310] 	--control-plane 
	I0831 15:05:49.419288    1510 kubeadm.go:310] 
	I0831 15:05:49.419433    1510 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 15:05:49.419447    1510 kubeadm.go:310] 
	I0831 15:05:49.419556    1510 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d16fxo.9wsqs4xe55vfs911 \
	I0831 15:05:49.419685    1510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 
	I0831 15:05:49.420526    1510 kubeadm.go:310] W0831 22:05:42.295308    1588 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 15:05:49.421228    1510 kubeadm.go:310] W0831 22:05:42.295882    1588 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 15:05:49.421441    1510 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
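The two W-level lines are kubeadm flagging that the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API for ClusterConfiguration and InitConfiguration; the warning itself names the remedy, which would be run against the generated /var/tmp/minikube/kubeadm.yaml:

	kubeadm config migrate --old-config old.yaml --new-config new.yaml

The Service-Kubelet warning is expected here as well: the runner started the kubelet directly with systemctl start kubelet earlier in this log rather than relying on systemd enablement.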
	I0831 15:05:49.421471    1510 cni.go:84] Creating CNI manager for ""
	I0831 15:05:49.421500    1510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:05:49.426028    1510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 15:05:49.433254    1510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 15:05:49.446288    1510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0831 15:05:49.463451    1510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 15:05:49.463572    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:49.463604    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-492000 minikube.k8s.io/updated_at=2024_08_31T15_05_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-492000 minikube.k8s.io/primary=true
	I0831 15:05:49.470898    1510 ops.go:34] apiserver oom_adj: -16
	I0831 15:05:49.534355    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:50.036490    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:50.536479    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:51.036482    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:51.535208    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:52.036429    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:52.536449    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:53.036646    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:53.536291    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:54.036397    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:54.536375    1510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:05:54.580265    1510 kubeadm.go:1113] duration metric: took 5.116918s to wait for elevateKubeSystemPrivileges
	I0831 15:05:54.580284    1510 kubeadm.go:394] duration metric: took 12.116292292s to StartCluster
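elevateKubeSystemPrivileges above is a 500ms poll of kubectl get sa default that succeeds once the default service account exists, about 5.1s after init. A hypothetical standalone version of the same wait:

	// Sketch: retry "kubectl get sa default" on a 500ms cadence until
	// the default service account appears. Timeout is an assumption.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				fmt.Println("default service account ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("default service account never appeared")
	}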
	I0831 15:05:54.580294    1510 settings.go:142] acquiring lock: {Name:mk1f68c46dbdc908eb5f421f0b3b97ef9fe18a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:54.580453    1510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:05:54.580638    1510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:54.580866    1510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 15:05:54.580895    1510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 15:05:54.580878    1510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:05:54.580954    1510 addons.go:69] Setting yakd=true in profile "addons-492000"
	I0831 15:05:54.580976    1510 addons.go:69] Setting registry=true in profile "addons-492000"
	I0831 15:05:54.580987    1510 addons.go:234] Setting addon registry=true in "addons-492000"
	I0831 15:05:54.580992    1510 addons.go:69] Setting ingress=true in profile "addons-492000"
	I0831 15:05:54.581000    1510 addons.go:234] Setting addon ingress=true in "addons-492000"
	I0831 15:05:54.581001    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581015    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581016    1510 addons.go:69] Setting default-storageclass=true in profile "addons-492000"
	I0831 15:05:54.581041    1510 addons.go:69] Setting ingress-dns=true in profile "addons-492000"
	I0831 15:05:54.581051    1510 addons.go:234] Setting addon ingress-dns=true in "addons-492000"
	I0831 15:05:54.581052    1510 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-492000"
	I0831 15:05:54.581062    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581061    1510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-492000"
	I0831 15:05:54.581038    1510 addons.go:69] Setting volumesnapshots=true in profile "addons-492000"
	I0831 15:05:54.581082    1510 addons.go:234] Setting addon volumesnapshots=true in "addons-492000"
	I0831 15:05:54.581091    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581035    1510 config.go:182] Loaded profile config "addons-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:05:54.581054    1510 addons.go:69] Setting gcp-auth=true in profile "addons-492000"
	I0831 15:05:54.581199    1510 mustload.go:65] Loading cluster: addons-492000
	I0831 15:05:54.580966    1510 addons.go:234] Setting addon yakd=true in "addons-492000"
	I0831 15:05:54.581300    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581357    1510 config.go:182] Loaded profile config "addons-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:05:54.580969    1510 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-492000"
	I0831 15:05:54.581376    1510 retry.go:31] will retry after 619.294676ms: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581392    1510 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-492000"
	I0831 15:05:54.581393    1510 retry.go:31] will retry after 1.48712586s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.580967    1510 addons.go:69] Setting metrics-server=true in profile "addons-492000"
	I0831 15:05:54.581403    1510 addons.go:234] Setting addon metrics-server=true in "addons-492000"
	I0831 15:05:54.581411    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581565    1510 retry.go:31] will retry after 1.028772327s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581565    1510 retry.go:31] will retry after 1.053200631s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.580971    1510 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-492000"
	I0831 15:05:54.581576    1510 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-492000"
	I0831 15:05:54.581582    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581094    1510 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-492000"
	I0831 15:05:54.581645    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581700    1510 retry.go:31] will retry after 637.403013ms: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.580972    1510 addons.go:69] Setting volcano=true in profile "addons-492000"
	I0831 15:05:54.581708    1510 addons.go:234] Setting addon volcano=true in "addons-492000"
	I0831 15:05:54.581710    1510 retry.go:31] will retry after 1.273777706s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581715    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581016    1510 addons.go:69] Setting inspektor-gadget=true in profile "addons-492000"
	I0831 15:05:54.581783    1510 addons.go:234] Setting addon inspektor-gadget=true in "addons-492000"
	I0831 15:05:54.581042    1510 addons.go:69] Setting cloud-spanner=true in profile "addons-492000"
	I0831 15:05:54.581808    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.581827    1510 addons.go:234] Setting addon cloud-spanner=true in "addons-492000"
	I0831 15:05:54.581835    1510 retry.go:31] will retry after 1.486356486s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581856    1510 retry.go:31] will retry after 1.474260019s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581862    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.580963    1510 addons.go:69] Setting storage-provisioner=true in profile "addons-492000"
	I0831 15:05:54.581899    1510 addons.go:234] Setting addon storage-provisioner=true in "addons-492000"
	I0831 15:05:54.581919    1510 retry.go:31] will retry after 554.925255ms: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581925    1510 retry.go:31] will retry after 1.138382192s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.581939    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:54.582065    1510 retry.go:31] will retry after 1.435727377s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.582153    1510 retry.go:31] will retry after 1.400543327s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
	I0831 15:05:54.582154    1510 retry.go:31] will retry after 1.004216521s: connect: dial unix /Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/monitor: connect: connection refused
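The burst of retry.go:31 lines is each addon's monitor-socket dial failing with connection refused and rescheduling itself after an individually jittered delay (554ms to ~1.49s above) instead of aborting. A sketch of that retry-with-jitter shape:

	// Sketch: retry an operation after a jittered delay instead of
	// failing outright on a refused connection, as retry.go does above.
	package main

	import (
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Delay in [base, 2*base), loosely matching the spread above.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, 600*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("connect: connection refused")
			}
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}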
	I0831 15:05:54.587073    1510 out.go:177] * Verifying Kubernetes components...
	I0831 15:05:54.592987    1510 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 15:05:54.596018    1510 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 15:05:54.596053    1510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:05:54.604031    1510 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 15:05:54.604040    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 15:05:54.604049    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:54.607078    1510 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 15:05:54.610056    1510 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 15:05:54.610063    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 15:05:54.610070    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:54.645280    1510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 15:05:54.693748    1510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:05:54.720011    1510 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 15:05:54.720023    1510 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 15:05:54.726302    1510 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 15:05:54.726311    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 15:05:54.733610    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 15:05:54.770610    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 15:05:54.889593    1510 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
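The sed pipeline at 15:05:54.645280 edits the coredns ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive and a "log" directive ahead of "errors", then replaces the ConfigMap. The injected Corefile fragment, reconstructed from that command, is:

	        hosts {
	           192.168.105.1 host.minikube.internal
	           fallthrough
	        }

With this in place, pods can resolve host.minikube.internal to the host's gateway address, which is what the confirmation line above reports.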
	I0831 15:05:54.890172    1510 node_ready.go:35] waiting up to 6m0s for node "addons-492000" to be "Ready" ...
	I0831 15:05:54.900144    1510 node_ready.go:49] node "addons-492000" has status "Ready":"True"
	I0831 15:05:54.900153    1510 node_ready.go:38] duration metric: took 9.970583ms for node "addons-492000" to be "Ready" ...
	I0831 15:05:54.900157    1510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 15:05:54.912064    1510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-598tf" in "kube-system" namespace to be "Ready" ...
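node_ready and pod_ready above poll the API for a Ready condition with a 6m0s budget; the node reported Ready within ~10ms because the control plane had just passed its health checks. An illustrative client-go version of the node wait (the test uses minikube's internal helpers, not this code):

	// Sketch: poll a node's Ready condition via client-go until it is
	// True or the 6m0s budget from the log runs out.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-492000", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("node never became Ready")
	}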
	I0831 15:05:55.142290    1510 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0831 15:05:55.147261    1510 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0831 15:05:55.157318    1510 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0831 15:05:55.160712    1510 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 15:05:55.160720    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0831 15:05:55.160730    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.204271    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 15:05:55.208239    1510 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 15:05:55.208252    1510 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 15:05:55.208264    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.222270    1510 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 15:05:55.225289    1510 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 15:05:55.225303    1510 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 15:05:55.225315    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.231350    1510 addons.go:475] Verifying addon registry=true in "addons-492000"
	I0831 15:05:55.235255    1510 out.go:177] * Verifying registry addon...
	I0831 15:05:55.237275    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 15:05:55.242700    1510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 15:05:55.246212    1510 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 15:05:55.246222    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:55.315673    1510 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 15:05:55.315686    1510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 15:05:55.343805    1510 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 15:05:55.343819    1510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 15:05:55.360399    1510 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 15:05:55.360412    1510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 15:05:55.380925    1510 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 15:05:55.380941    1510 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 15:05:55.393702    1510 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-492000" context rescaled to 1 replicas
	I0831 15:05:55.413048    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 15:05:55.413062    1510 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 15:05:55.422944    1510 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 15:05:55.422955    1510 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 15:05:55.432956    1510 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 15:05:55.432968    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 15:05:55.438650    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 15:05:55.446245    1510 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 15:05:55.446262    1510 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 15:05:55.467757    1510 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 15:05:55.467768    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 15:05:55.498757    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 15:05:55.594972    1510 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 15:05:55.599296    1510 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 15:05:55.599306    1510 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 15:05:55.599318    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.612830    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:55.637715    1510 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-492000"
	I0831 15:05:55.637739    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:55.641293    1510 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 15:05:55.645653    1510 out.go:177]   - Using image docker.io/busybox:stable
	I0831 15:05:55.649356    1510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 15:05:55.649367    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 15:05:55.649377    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.725278    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 15:05:55.729279    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 15:05:55.733243    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 15:05:55.737298    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 15:05:55.746329    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 15:05:55.756260    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 15:05:55.765283    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 15:05:55.775316    1510 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 15:05:55.779233    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 15:05:55.779247    1510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 15:05:55.779258    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.783553    1510 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 15:05:55.783565    1510 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 15:05:55.795048    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:55.840829    1510 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 15:05:55.840839    1510 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 15:05:55.851472    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 15:05:55.861298    1510 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 15:05:55.868277    1510 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 15:05:55.871304    1510 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 15:05:55.871460    1510 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 15:05:55.871468    1510 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 15:05:55.874364    1510 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 15:05:55.874369    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 15:05:55.874376    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:55.943092    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 15:05:55.943102    1510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 15:05:55.950239    1510 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 15:05:55.950249    1510 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 15:05:55.989075    1510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:05:55.994908    1510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:05:55.994919    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 15:05:55.994928    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:56.021285    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 15:05:56.021298    1510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 15:05:56.021332    1510 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 15:05:56.025282    1510 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 15:05:56.025294    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 15:05:56.025304    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:56.034712    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 15:05:56.060320    1510 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 15:05:56.064302    1510 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 15:05:56.064312    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 15:05:56.064324    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:56.064590    1510 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 15:05:56.064595    1510 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 15:05:56.073219    1510 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 15:05:56.077236    1510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 15:05:56.077245    1510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 15:05:56.077256    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:56.078248    1510 addons.go:234] Setting addon default-storageclass=true in "addons-492000"
	I0831 15:05:56.078264    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:05:56.078805    1510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 15:05:56.078810    1510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 15:05:56.078814    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:05:56.117407    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 15:05:56.117418    1510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 15:05:56.141673    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 15:05:56.175199    1510 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 15:05:56.175211    1510 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 15:05:56.185798    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 15:05:56.185808    1510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 15:05:56.234590    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 15:05:56.237397    1510 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 15:05:56.237404    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 15:05:56.239508    1510 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 15:05:56.239514    1510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 15:05:56.245756    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:56.259121    1510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 15:05:56.259131    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 15:05:56.264764    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:05:56.281594    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 15:05:56.304209    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 15:05:56.304297    1510 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 15:05:56.304303    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 15:05:56.315794    1510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 15:05:56.315808    1510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 15:05:56.402413    1510 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 15:05:56.402426    1510 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 15:05:56.404591    1510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 15:05:56.404599    1510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 15:05:56.509220    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 15:05:56.512225    1510 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 15:05:56.512233    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 15:05:56.598019    1510 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 15:05:56.598029    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 15:05:56.647566    1510 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 15:05:56.647581    1510 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 15:05:56.737698    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 15:05:56.746184    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:56.971644    1510 pod_ready.go:103] pod "coredns-6f6b679f8f-598tf" in "kube-system" namespace has status "Ready":"False"
	I0831 15:05:57.264702    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:57.854963    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:58.287527    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:58.705556    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.468347875s)
	I0831 15:05:58.705598    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.267012s)
	W0831 15:05:58.705610    1510 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 15:05:58.705625    1510 retry.go:31] will retry after 185.842253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
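	The "ensure CRDs are installed first" failure above is the usual kubectl ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them are sent in one apply, and the REST mapping for snapshot.storage.k8s.io/v1 is not discoverable until the CRDs are established, so the batch fails once and minikube retries (the retry at 15:05:58.893 below presumably succeeds because the CRDs created by the first attempt are established by then). A minimal sketch of the two-phase apply that avoids the race entirely, using the manifest paths from the log; the kubectl wait step is the standard way to block until a CRD is established:

	    # Phase 1: install only the CRDs.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.0/kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

	    # Phase 2: block until the CRD is established, then apply the CR that uses it.
	    kubectl wait --for=condition=Established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml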
	I0831 15:05:58.705650    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.206951708s)
	I0831 15:05:58.705682    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.854261667s)
	I0831 15:05:58.711190    1510 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-492000 service yakd-dashboard -n yakd-dashboard
	
	I0831 15:05:58.787488    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:58.893622    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 15:05:59.191466    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.049845625s)
	I0831 15:05:59.191511    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.956977666s)
	I0831 15:05:59.191525    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.156873625s)
	I0831 15:05:59.191534    1510 addons.go:475] Verifying addon ingress=true in "addons-492000"
	I0831 15:05:59.191612    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.910075334s)
	I0831 15:05:59.191638    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.92689575s)
	I0831 15:05:59.191720    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.88756475s)
	I0831 15:05:59.191754    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.682578291s)
	I0831 15:05:59.191760    1510 addons.go:475] Verifying addon metrics-server=true in "addons-492000"
	I0831 15:05:59.198261    1510 out.go:177] * Verifying ingress addon...
	I0831 15:05:59.207645    1510 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 15:05:59.218627    1510 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 15:05:59.316001    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:05:59.425501    1510 pod_ready.go:103] pod "coredns-6f6b679f8f-598tf" in "kube-system" namespace has status "Ready":"False"
	I0831 15:05:59.504045    1510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.766390417s)
	I0831 15:05:59.504068    1510 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-492000"
	I0831 15:05:59.508191    1510 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 15:05:59.514659    1510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 15:05:59.520871    1510 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 15:05:59.520885    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:05:59.750003    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:00.018050    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:00.310420    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:00.519282    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:00.745573    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:01.018757    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:01.308961    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:01.415251    1510 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-598tf" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-598tf" not found
	I0831 15:06:01.415262    1510 pod_ready.go:82] duration metric: took 6.503337584s for pod "coredns-6f6b679f8f-598tf" in "kube-system" namespace to be "Ready" ...
	E0831 15:06:01.415268    1510 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-598tf" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-598tf" not found
	I0831 15:06:01.415272    1510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dcgdp" in "kube-system" namespace to be "Ready" ...
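	The switch from coredns-6f6b679f8f-598tf to coredns-6f6b679f8f-dcgdp above is expected rather than an error: minikube trims the CoreDNS Deployment to a single replica during startup, so the pod the readiness poller was watching gets deleted ("not found") and the wait moves to the surviving replica. A quick way to list whatever replica survives (a sketch; the label selector is the k8s-app=kube-dns one used by the waits above):

	    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide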
	I0831 15:06:01.519158    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:01.747905    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:02.019806    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:02.301653    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:02.518876    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:02.745541    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:02.818769    1510 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 15:06:02.818786    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:06:02.849601    1510 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 15:06:02.856592    1510 addons.go:234] Setting addon gcp-auth=true in "addons-492000"
	I0831 15:06:02.856613    1510 host.go:66] Checking if "addons-492000" exists ...
	I0831 15:06:02.857354    1510 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 15:06:02.857362    1510 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/addons-492000/id_rsa Username:docker}
	I0831 15:06:02.887844    1510 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 15:06:02.891682    1510 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 15:06:02.897806    1510 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 15:06:02.897813    1510 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 15:06:02.903832    1510 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 15:06:02.903839    1510 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 15:06:02.909880    1510 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 15:06:02.909886    1510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 15:06:02.917388    1510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 15:06:02.919273    1510 pod_ready.go:93] pod "coredns-6f6b679f8f-dcgdp" in "kube-system" namespace has status "Ready":"True"
	I0831 15:06:02.919280    1510 pod_ready.go:82] duration metric: took 1.504039959s for pod "coredns-6f6b679f8f-dcgdp" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.919284    1510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.921265    1510 pod_ready.go:93] pod "etcd-addons-492000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:06:02.921270    1510 pod_ready.go:82] duration metric: took 1.983042ms for pod "etcd-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.921274    1510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.924201    1510 pod_ready.go:93] pod "kube-apiserver-addons-492000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:06:02.924207    1510 pod_ready.go:82] duration metric: took 2.930209ms for pod "kube-apiserver-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.924211    1510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.926447    1510 pod_ready.go:93] pod "kube-controller-manager-addons-492000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:06:02.926456    1510 pod_ready.go:82] duration metric: took 2.24225ms for pod "kube-controller-manager-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.926460    1510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c9pfm" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.928606    1510 pod_ready.go:93] pod "kube-proxy-c9pfm" in "kube-system" namespace has status "Ready":"True"
	I0831 15:06:02.928613    1510 pod_ready.go:82] duration metric: took 2.149792ms for pod "kube-proxy-c9pfm" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:02.928617    1510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:03.019005    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:03.241184    1510 addons.go:475] Verifying addon gcp-auth=true in "addons-492000"
	I0831 15:06:03.246154    1510 out.go:177] * Verifying gcp-auth addon...
	I0831 15:06:03.256563    1510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 15:06:03.312535    1510 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 15:06:03.312819    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:03.318567    1510 pod_ready.go:93] pod "kube-scheduler-addons-492000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:06:03.318577    1510 pod_ready.go:82] duration metric: took 389.966291ms for pod "kube-scheduler-addons-492000" in "kube-system" namespace to be "Ready" ...
	I0831 15:06:03.318580    1510 pod_ready.go:39] duration metric: took 8.418616s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
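	Each wait in the block above is a poll for the pod condition Ready=True on a label-selected pod, capped at 6 minutes. A plain-kubectl equivalent, as a sketch, with the selectors copied from the label list in the summary line:

	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	    done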
	I0831 15:06:03.318589    1510 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:06:03.318644    1510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:06:03.324840    1510 api_server.go:72] duration metric: took 8.744087042s to wait for apiserver process to appear ...
	I0831 15:06:03.324847    1510 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:06:03.324856    1510 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0831 15:06:03.327863    1510 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0831 15:06:03.328432    1510 api_server.go:141] control plane version: v1.31.0
	I0831 15:06:03.328441    1510 api_server.go:131] duration metric: took 3.591625ms to wait for apiserver health ...
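	The healthz check above is simply an HTTPS GET against the apiserver; HTTP 200 with the body "ok" is what the 3.59 ms probe saw. A standalone equivalent (a sketch; -k skips verification of the cluster-issued serving certificate):

	    curl -sk https://192.168.105.2:8443/healthz
	    # -> ok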
	I0831 15:06:03.328444    1510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 15:06:03.519084    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:03.521650    1510 system_pods.go:59] 17 kube-system pods found
	I0831 15:06:03.521656    1510 system_pods.go:61] "coredns-6f6b679f8f-dcgdp" [56ba471a-a1bd-4d31-ab79-ff6b71deae1d] Running
	I0831 15:06:03.521660    1510 system_pods.go:61] "csi-hostpath-attacher-0" [eaa16d4a-ee3c-48fa-9f62-0a782f5151f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 15:06:03.521663    1510 system_pods.go:61] "csi-hostpath-resizer-0" [3627d990-7957-42ea-a642-f3f5fbd9a56f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 15:06:03.521666    1510 system_pods.go:61] "csi-hostpathplugin-cp7lz" [0cc88a88-660a-472b-9f9a-cf2f55fdd5e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 15:06:03.521669    1510 system_pods.go:61] "etcd-addons-492000" [28080b3e-127a-4dd3-9c2a-f9bfdae164c1] Running
	I0831 15:06:03.521671    1510 system_pods.go:61] "kube-apiserver-addons-492000" [b4f55987-b4a2-47dd-b470-613ee3d3950f] Running
	I0831 15:06:03.521673    1510 system_pods.go:61] "kube-controller-manager-addons-492000" [bf425e59-cb5a-4c0f-8cd0-6c70ac91ef34] Running
	I0831 15:06:03.521677    1510 system_pods.go:61] "kube-ingress-dns-minikube" [58edb9c3-1e48-47b4-98e2-19827c514533] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0831 15:06:03.521679    1510 system_pods.go:61] "kube-proxy-c9pfm" [e4dea774-7ae5-4950-a557-b103730150de] Running
	I0831 15:06:03.521682    1510 system_pods.go:61] "kube-scheduler-addons-492000" [01f5db5d-2c73-4ac5-ac2d-f160ee2ebbd2] Running
	I0831 15:06:03.521685    1510 system_pods.go:61] "metrics-server-84c5f94fbc-p6wqw" [61c9f92c-d1e6-4024-9b33-3c247eca65f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 15:06:03.521688    1510 system_pods.go:61] "nvidia-device-plugin-daemonset-xbb9h" [5401db3a-6b96-4340-9c15-bc04f6727ea6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0831 15:06:03.521692    1510 system_pods.go:61] "registry-6fb4cdfc84-dbh5s" [f256bb8d-d37d-4371-a5a9-38e94d41cce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 15:06:03.521694    1510 system_pods.go:61] "registry-proxy-7mcwz" [bb84f056-88e3-4c3d-845b-eb2704863b95] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 15:06:03.521696    1510 system_pods.go:61] "snapshot-controller-56fcc65765-2hmhl" [dd2a1d6c-f1dd-4f94-9f43-74240d211026] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 15:06:03.521699    1510 system_pods.go:61] "snapshot-controller-56fcc65765-zpf6n" [67f30329-e856-405c-9bc1-d2170a5ff69a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 15:06:03.521701    1510 system_pods.go:61] "storage-provisioner" [197744a5-f425-4cf8-b155-dc152d9d9a2e] Running
	I0831 15:06:03.521704    1510 system_pods.go:74] duration metric: took 193.261833ms to wait for pod list to return data ...
	I0831 15:06:03.521708    1510 default_sa.go:34] waiting for default service account to be created ...
	I0831 15:06:03.718645    1510 default_sa.go:45] found service account: "default"
	I0831 15:06:03.718666    1510 default_sa.go:55] duration metric: took 196.95575ms for default service account to be created ...
	I0831 15:06:03.718674    1510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 15:06:03.746588    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:03.925848    1510 system_pods.go:86] 17 kube-system pods found
	I0831 15:06:03.925866    1510 system_pods.go:89] "coredns-6f6b679f8f-dcgdp" [56ba471a-a1bd-4d31-ab79-ff6b71deae1d] Running
	I0831 15:06:03.925873    1510 system_pods.go:89] "csi-hostpath-attacher-0" [eaa16d4a-ee3c-48fa-9f62-0a782f5151f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 15:06:03.925880    1510 system_pods.go:89] "csi-hostpath-resizer-0" [3627d990-7957-42ea-a642-f3f5fbd9a56f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 15:06:03.925885    1510 system_pods.go:89] "csi-hostpathplugin-cp7lz" [0cc88a88-660a-472b-9f9a-cf2f55fdd5e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 15:06:03.925890    1510 system_pods.go:89] "etcd-addons-492000" [28080b3e-127a-4dd3-9c2a-f9bfdae164c1] Running
	I0831 15:06:03.925893    1510 system_pods.go:89] "kube-apiserver-addons-492000" [b4f55987-b4a2-47dd-b470-613ee3d3950f] Running
	I0831 15:06:03.925896    1510 system_pods.go:89] "kube-controller-manager-addons-492000" [bf425e59-cb5a-4c0f-8cd0-6c70ac91ef34] Running
	I0831 15:06:03.925900    1510 system_pods.go:89] "kube-ingress-dns-minikube" [58edb9c3-1e48-47b4-98e2-19827c514533] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0831 15:06:03.925903    1510 system_pods.go:89] "kube-proxy-c9pfm" [e4dea774-7ae5-4950-a557-b103730150de] Running
	I0831 15:06:03.925906    1510 system_pods.go:89] "kube-scheduler-addons-492000" [01f5db5d-2c73-4ac5-ac2d-f160ee2ebbd2] Running
	I0831 15:06:03.925911    1510 system_pods.go:89] "metrics-server-84c5f94fbc-p6wqw" [61c9f92c-d1e6-4024-9b33-3c247eca65f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 15:06:03.925917    1510 system_pods.go:89] "nvidia-device-plugin-daemonset-xbb9h" [5401db3a-6b96-4340-9c15-bc04f6727ea6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0831 15:06:03.925921    1510 system_pods.go:89] "registry-6fb4cdfc84-dbh5s" [f256bb8d-d37d-4371-a5a9-38e94d41cce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 15:06:03.925924    1510 system_pods.go:89] "registry-proxy-7mcwz" [bb84f056-88e3-4c3d-845b-eb2704863b95] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 15:06:03.925938    1510 system_pods.go:89] "snapshot-controller-56fcc65765-2hmhl" [dd2a1d6c-f1dd-4f94-9f43-74240d211026] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 15:06:03.925945    1510 system_pods.go:89] "snapshot-controller-56fcc65765-zpf6n" [67f30329-e856-405c-9bc1-d2170a5ff69a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 15:06:03.925949    1510 system_pods.go:89] "storage-provisioner" [197744a5-f425-4cf8-b155-dc152d9d9a2e] Running
	I0831 15:06:03.925955    1510 system_pods.go:126] duration metric: took 207.280041ms to wait for k8s-apps to be running ...
	I0831 15:06:03.925962    1510 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 15:06:03.926060    1510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 15:06:03.934569    1510 system_svc.go:56] duration metric: took 8.603041ms WaitForService to wait for kubelet
	I0831 15:06:03.934584    1510 kubeadm.go:582] duration metric: took 9.3538455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:06:03.934601    1510 node_conditions.go:102] verifying NodePressure condition ...
	I0831 15:06:04.017583    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:04.120275    1510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 15:06:04.120287    1510 node_conditions.go:123] node cpu capacity is 2
	I0831 15:06:04.120293    1510 node_conditions.go:105] duration metric: took 185.689125ms to run NodePressure ...
	I0831 15:06:04.120299    1510 start.go:241] waiting for startup goroutines ...
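	The NodePressure verification above reads the node's status conditions and capacity; the cpu and ephemeral-storage figures come straight from .status.capacity. A direct way to reproduce the same read against the node used in this run (a sketch; output comment abridged):

	    kubectl get node addons-492000 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	    kubectl get node addons-492000 -o jsonpath='{.status.capacity}'
	    # -> {"cpu":"2", "ephemeral-storage":"17734596Ki", ...}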
	I0831 15:06:04.313180    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:04.519708    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:04.746021    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:05.018844    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:05.245003    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:05.520289    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:05.747261    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:06.021458    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:06.313184    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:06.519711    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:06.745989    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:07.018964    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:07.246056    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:07.518874    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:07.746251    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:08.018780    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:08.245959    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:08.519059    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:08.745896    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 15:06:09.018636    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:09.245772    1510 kapi.go:107] duration metric: took 14.003420666s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 15:06:09.519767    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:10.021802    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:10.518779    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:11.018889    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:11.518611    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:12.018735    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:12.518919    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:13.019053    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:13.519287    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:14.018681    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:14.518580    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:15.035732    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:15.518645    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:16.018557    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:16.518606    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:17.018600    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:17.518614    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:18.018683    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:18.518665    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:19.018582    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:19.518870    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:20.021151    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:20.519618    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:21.018727    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:21.520180    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:22.019466    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:22.518477    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:23.018320    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:23.518436    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:24.018150    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:24.521088    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:25.018185    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:25.518491    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:26.021047    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:26.517389    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:27.020112    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:27.518697    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:28.018559    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:28.518420    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:29.019519    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:29.518229    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:30.018384    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:30.518166    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:31.018069    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:31.518597    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:32.019061    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:32.519494    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:33.019115    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:33.518245    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:34.018773    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:34.518252    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:35.019048    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:35.519660    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:36.019319    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:36.518018    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:37.018198    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:37.518766    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:38.023996    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:38.518883    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:39.018218    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:39.518399    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:40.020818    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:40.518975    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:41.018118    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:41.518073    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:42.018708    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:42.518210    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:43.017965    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:43.518022    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:44.018414    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:44.518313    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:45.017925    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:45.518104    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:46.018599    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:46.518357    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:47.017873    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:47.518406    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:48.018297    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:48.516342    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:49.016820    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:49.517665    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:50.018061    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:50.517875    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:51.018865    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:51.517925    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:52.018440    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:52.517840    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:53.020616    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:53.517868    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:54.019731    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:54.515792    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:55.016053    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:55.517598    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 15:06:56.018477    1510 kapi.go:107] duration metric: took 56.505144292s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 15:07:02.211028    1510 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 15:07:02.211040    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:02.710196    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:03.210242    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:03.713824    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:04.210302    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:04.710335    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:05.212391    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:05.715290    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:06.210105    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:06.710402    1510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 15:07:07.211196    1510 kapi.go:107] duration metric: took 1m8.005149s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 15:07:25.257644    1510 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 15:07:25.257654    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:25.758671    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:26.258521    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:26.758274    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:27.258344    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:27.758259    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:28.258619    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:28.758003    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:29.258538    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:29.758342    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:30.258196    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:30.758210    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:31.258346    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:31.757937    1510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 15:07:32.258360    1510 kapi.go:107] duration metric: took 1m29.003890208s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 15:07:32.262141    1510 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-492000 cluster.
	I0831 15:07:32.269019    1510 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 15:07:32.273474    1510 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
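	Per the hint above, opting a single pod out of credential injection only requires the gcp-auth-skip-secret label key. A minimal sketch with a hypothetical pod; the message above only specifies the key, and "true" is assumed here as a conventional value:

	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds            # hypothetical example pod
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: app
	        image: nginx:alpine
	    EOF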
	I0831 15:07:32.278062    1510 out.go:177] * Enabled addons: ingress-dns, volcano, yakd, storage-provisioner-rancher, cloud-spanner, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0831 15:07:32.281067    1510 addons.go:510] duration metric: took 1m37.702478959s for enable addons: enabled=[ingress-dns volcano yakd storage-provisioner-rancher cloud-spanner nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
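	The enabled-addon set summarized above can be confirmed after the fact from the host (sketch):

	    minikube -p addons-492000 addons list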
	I0831 15:07:32.281080    1510 start.go:246] waiting for cluster config update ...
	I0831 15:07:32.281090    1510 start.go:255] writing updated cluster config ...
	I0831 15:07:32.282723    1510 ssh_runner.go:195] Run: rm -f paused
	I0831 15:07:32.441308    1510 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0831 15:07:32.445997    1510 out.go:201] 
	W0831 15:07:32.450802    1510 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0831 15:07:32.455289    1510 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0831 15:07:32.462054    1510 out.go:177] * Done! kubectl is now configured to use "addons-492000" cluster and "default" namespace by default
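	The skew warning a few lines up follows kubectl's version-skew policy, which supports one minor version of drift in either direction; a 1.29 client against a 1.31 control plane is outside that window, hence "minor skew: 2". The suggested workaround shells out to a client that matches the cluster (sketch):

	    minikube -p addons-492000 kubectl -- version
	    minikube -p addons-492000 kubectl -- get pods -A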
	
	
	==> Docker <==
	Aug 31 22:17:18 addons-492000 cri-dockerd[1179]: time="2024-08-31T22:17:18Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Aug 31 22:17:18 addons-492000 dockerd[1285]: time="2024-08-31T22:17:18.900434441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 31 22:17:18 addons-492000 dockerd[1285]: time="2024-08-31T22:17:18.900460428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 31 22:17:18 addons-492000 dockerd[1285]: time="2024-08-31T22:17:18.900474338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:17:18 addons-492000 dockerd[1285]: time="2024-08-31T22:17:18.900502199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:17:22 addons-492000 dockerd[1279]: time="2024-08-31T22:17:22.235651743Z" level=info msg="ignoring event" container=0fbbb94840f7296508a903841a6fccf6234c1585b3d43a090292d99543ee7191 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.235764938Z" level=info msg="shim disconnected" id=0fbbb94840f7296508a903841a6fccf6234c1585b3d43a090292d99543ee7191 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.235813122Z" level=warning msg="cleaning up after shim disconnected" id=0fbbb94840f7296508a903841a6fccf6234c1585b3d43a090292d99543ee7191 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.235817287Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.392578004Z" level=info msg="shim disconnected" id=ec6857b91f36de15e5d499399219ba3da5e33281c565f936c4ae0bb4c99ea35b namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.392613236Z" level=warning msg="cleaning up after shim disconnected" id=ec6857b91f36de15e5d499399219ba3da5e33281c565f936c4ae0bb4c99ea35b namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.392921460Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1279]: time="2024-08-31T22:17:22.392852286Z" level=info msg="ignoring event" container=ec6857b91f36de15e5d499399219ba3da5e33281c565f936c4ae0bb4c99ea35b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.423206443Z" level=info msg="shim disconnected" id=1a69a70f467790f40583cf4b6583c83dcd07d25ff5a212f5e643c49c0f04e9e9 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.423238928Z" level=warning msg="cleaning up after shim disconnected" id=1a69a70f467790f40583cf4b6583c83dcd07d25ff5a212f5e643c49c0f04e9e9 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1279]: time="2024-08-31T22:17:22.424068854Z" level=info msg="ignoring event" container=1a69a70f467790f40583cf4b6583c83dcd07d25ff5a212f5e643c49c0f04e9e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.423243675Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1279]: time="2024-08-31T22:17:22.481801837Z" level=info msg="ignoring event" container=8a7d6a80f4449ab46676f0be37859d9428ae780c5fc983d02751cd254dc49096 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.482268316Z" level=info msg="shim disconnected" id=8a7d6a80f4449ab46676f0be37859d9428ae780c5fc983d02751cd254dc49096 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.482302299Z" level=warning msg="cleaning up after shim disconnected" id=8a7d6a80f4449ab46676f0be37859d9428ae780c5fc983d02751cd254dc49096 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.482309046Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1279]: time="2024-08-31T22:17:22.548439286Z" level=info msg="ignoring event" container=96214ee629c2f4a103a7aaa2abda96090aaca28371802458064c018e27f8bec5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.548513916Z" level=info msg="shim disconnected" id=96214ee629c2f4a103a7aaa2abda96090aaca28371802458064c018e27f8bec5 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.548546442Z" level=warning msg="cleaning up after shim disconnected" id=96214ee629c2f4a103a7aaa2abda96090aaca28371802458064c018e27f8bec5 namespace=moby
	Aug 31 22:17:22 addons-492000 dockerd[1285]: time="2024-08-31T22:17:22.548550857Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	777c191b55027       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                4 seconds ago       Running             nginx                      0                   4a9aa1f695c64       nginx
	6a867708909cc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   00c7f5a621c76       gcp-auth-89d5ffd79-wdfjr
	db794dd61138d       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             9 minutes ago       Running             controller                 0                   216858a39f344       ingress-nginx-controller-bc57996ff-cqr6m
	e788d1229813b       420193b27261a                                                                                                                10 minutes ago      Exited              patch                      1                   6aa69fce92a4d       ingress-nginx-admission-patch-vh6m6
	afdb2e4f3bc37       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                     0                   4674cd526709e       ingress-nginx-admission-create-969c8
	6829996610dab       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               10 minutes ago      Running             cloud-spanner-emulator     0                   355e0ad8d6efd       cloud-spanner-emulator-769b77f747-gsrh2
	840d59d2cc069       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner     0                   a050a48526389       local-path-provisioner-86d989889c-n52tb
	a8b2ed44e9901       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     11 minutes ago      Running             nvidia-device-plugin-ctr   0                   d6d73563d3a4c       nvidia-device-plugin-daemonset-xbb9h
	43c357203b6fc       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        11 minutes ago      Running             yakd                       0                   ed7a3224d17a6       yakd-dashboard-67d98fc6b-wn4h6
	1a69a70f46779       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy             0                   96214ee629c2f       registry-proxy-7mcwz
	ec6857b91f36d       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             11 minutes ago      Exited              registry                   0                   8a7d6a80f4449       registry-6fb4cdfc84-dbh5s
	5d7dcfe608918       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             11 minutes ago      Running             minikube-ingress-dns       0                   acd077e0df333       kube-ingress-dns-minikube
	bc93fdabde220       ba04bb24b9575                                                                                                                11 minutes ago      Running             storage-provisioner        0                   b1eadf4a3cca4       storage-provisioner
	2d7e2572c7cab       2437cf7621777                                                                                                                11 minutes ago      Running             coredns                    0                   aac0af45edca8       coredns-6f6b679f8f-dcgdp
	b350457b47f5f       71d55d66fd4ee                                                                                                                11 minutes ago      Running             kube-proxy                 0                   6f52eed1adc98       kube-proxy-c9pfm
	e05eb9bbd5a03       fbbbd428abb4d                                                                                                                11 minutes ago      Running             kube-scheduler             0                   b413b5f39df80       kube-scheduler-addons-492000
	30c5b2e67a2f9       27e3830e14027                                                                                                                11 minutes ago      Running             etcd                       0                   e059c93f39e18       etcd-addons-492000
	01e29b6f418ce       fcb0683e6bdbd                                                                                                                11 minutes ago      Running             kube-controller-manager    0                   d2f420981ec1c       kube-controller-manager-addons-492000
	a5f317a16d699       cd0f0ae0ec9e0                                                                                                                11 minutes ago      Running             kube-apiserver             0                   c2740ea2220dc       kube-apiserver-addons-492000
	
	
	==> controller_ingress [db794dd61138] <==
	I0831 22:07:28.288254       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"2d31b580-a798-4b51-9222-103e5b8dbfc8", APIVersion:"v1", ResourceVersion:"761", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0831 22:07:28.288262       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"56379ba5-acca-4a54-ba1d-8c66a4604f85", APIVersion:"v1", ResourceVersion:"764", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0831 22:07:29.482630       7 nginx.go:317] "Starting NGINX process"
	I0831 22:07:29.482714       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0831 22:07:29.482821       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0831 22:07:29.482892       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0831 22:07:29.490704       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0831 22:07:29.490908       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-cqr6m"
	I0831 22:07:29.505877       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-cqr6m" node="addons-492000"
	I0831 22:07:29.518902       7 controller.go:213] "Backend successfully reloaded"
	I0831 22:07:29.518952       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0831 22:07:29.519199       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-cqr6m", UID:"bf42cf20-33e6-44ef-8dd0-63dea4f21fce", APIVersion:"v1", ResourceVersion:"1335", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0831 22:17:15.413677       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0831 22:17:15.422925       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.009s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.009s testedConfigurationSize:18.1kB}
	I0831 22:17:15.422942       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0831 22:17:15.426106       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0831 22:17:15.426407       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0831 22:17:15.426453       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0831 22:17:15.427861       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"20441af2-91de-495b-9e51-7f608cadf6d2", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2661", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0831 22:17:15.447706       7 controller.go:213] "Backend successfully reloaded"
	I0831 22:17:15.448018       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-cqr6m", UID:"bf42cf20-33e6-44ef-8dd0-63dea4f21fce", APIVersion:"v1", ResourceVersion:"1335", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0831 22:17:18.760089       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0831 22:17:18.760152       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0831 22:17:18.868599       7 controller.go:213] "Backend successfully reloaded"
	I0831 22:17:18.868886       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-cqr6m", UID:"bf42cf20-33e6-44ef-8dd0-63dea4f21fce", APIVersion:"v1", ResourceVersion:"1335", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [2d7e2572c7ca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:58535 - 46229 "HINFO IN 5418551689093171039.8736883536661458860. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011533172s
	[INFO] 10.244.0.25:45743 - 26571 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000129719s
	[INFO] 10.244.0.25:38278 - 47261 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000270566s
	[INFO] 10.244.0.25:45526 - 22936 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000051354s
	[INFO] 10.244.0.25:51532 - 53290 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000034056s
	[INFO] 10.244.0.25:41993 - 27148 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031513s
	[INFO] 10.244.0.25:49484 - 60655 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000029887s
	[INFO] 10.244.0.25:52801 - 27133 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001274467s
	[INFO] 10.244.0.25:53100 - 45434 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002422842s
	
	
	==> describe nodes <==
	Name:               addons-492000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-492000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-492000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T15_05_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-492000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:05:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-492000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:17:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:13:26 +0000   Sat, 31 Aug 2024 22:05:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:13:26 +0000   Sat, 31 Aug 2024 22:05:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:13:26 +0000   Sat, 31 Aug 2024 22:05:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:13:26 +0000   Sat, 31 Aug 2024 22:05:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-492000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 f35de5166cbb4de3a2df9106c4dee883
	  System UUID:                f35de5166cbb4de3a2df9106c4dee883
	  Boot ID:                    cd1d7565-3d48-4719-b0b4-f7975d7fd69f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  default                     cloud-spanner-emulator-769b77f747-gsrh2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  gcp-auth                    gcp-auth-89d5ffd79-wdfjr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-cqr6m    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-dcgdp                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-492000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-492000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-492000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-c9pfm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-492000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-xbb9h        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-n52tb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-wn4h6              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-492000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-492000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-492000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-492000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-492000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-492000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node addons-492000 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node addons-492000 event: Registered Node addons-492000 in Controller
	
	
	==> dmesg <==
	[  +7.630901] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.916901] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.675696] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.092380] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.753178] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.371112] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.136482] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.821823] kauditd_printk_skb: 7 callbacks suppressed
	[Aug31 22:07] kauditd_printk_skb: 87 callbacks suppressed
	[ +13.549093] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.802090] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.628459] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.582347] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.610509] kauditd_printk_skb: 7 callbacks suppressed
	[Aug31 22:08] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.367136] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.504633] kauditd_printk_skb: 2 callbacks suppressed
	[Aug31 22:12] kauditd_printk_skb: 2 callbacks suppressed
	[Aug31 22:16] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.924351] kauditd_printk_skb: 19 callbacks suppressed
	[ +17.339959] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.621030] kauditd_printk_skb: 33 callbacks suppressed
	[Aug31 22:17] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.252100] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.053379] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [30c5b2e67a2f] <==
	{"level":"info","ts":"2024-08-31T22:05:45.917588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T22:05:45.917717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T22:05:45.917750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-08-31T22:05:45.917786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:05:45.917804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-08-31T22:05:45.917830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-08-31T22:05:45.917860Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-08-31T22:05:45.918589Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-492000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:05:45.918634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:05:45.918665Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:05:45.919573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:05:45.920129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-08-31T22:05:45.918702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:05:45.918792Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:05:45.920371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:05:45.920676Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:05:45.921151Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:05:45.922229Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:05:45.922259Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:05:45.922269Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-08-31T22:07:28.093013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.622327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:07:28.093169Z","caller":"traceutil/trace.go:171","msg":"trace[1317611377] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:1356; }","duration":"111.776721ms","start":"2024-08-31T22:07:27.981363Z","end":"2024-08-31T22:07:28.093140Z","steps":["trace[1317611377] 'count revisions from in-memory index tree'  (duration: 111.593524ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:15:45.953621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1880}
	{"level":"info","ts":"2024-08-31T22:15:46.058658Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1880,"took":"102.243277ms","hash":825096190,"current-db-size-bytes":9048064,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4952064,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-31T22:15:46.059116Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":825096190,"revision":1880,"compact-revision":-1}
	
	
	==> gcp-auth [6a867708909c] <==
	2024/08/31 22:07:31 GCP Auth Webhook started!
	2024/08/31 22:07:48 Ready to marshal response ...
	2024/08/31 22:07:48 Ready to write response ...
	2024/08/31 22:07:49 Ready to marshal response ...
	2024/08/31 22:07:49 Ready to write response ...
	2024/08/31 22:08:10 Ready to marshal response ...
	2024/08/31 22:08:10 Ready to write response ...
	2024/08/31 22:08:11 Ready to marshal response ...
	2024/08/31 22:08:11 Ready to write response ...
	2024/08/31 22:08:11 Ready to marshal response ...
	2024/08/31 22:08:11 Ready to write response ...
	2024/08/31 22:16:22 Ready to marshal response ...
	2024/08/31 22:16:22 Ready to write response ...
	2024/08/31 22:16:25 Ready to marshal response ...
	2024/08/31 22:16:25 Ready to write response ...
	2024/08/31 22:16:44 Ready to marshal response ...
	2024/08/31 22:16:44 Ready to write response ...
	2024/08/31 22:17:15 Ready to marshal response ...
	2024/08/31 22:17:15 Ready to write response ...
	
	
	==> kernel <==
	 22:17:22 up 11 min,  0 users,  load average: 0.30, 0.60, 0.46
	Linux addons-492000 5.10.207 #1 SMP PREEMPT Wed Aug 28 18:03:56 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a5f317a16d69] <==
	W0831 22:08:02.132675       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0831 22:08:02.333233       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0831 22:08:02.378085       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0831 22:08:02.384873       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0831 22:08:02.477080       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0831 22:08:02.541722       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0831 22:08:02.608407       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0831 22:16:33.357740       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:16:59.349713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:16:59.349741       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:16:59.357612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:16:59.357630       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:16:59.373306       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:16:59.373452       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:16:59.378499       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:16:59.378527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:16:59.388363       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:16:59.388383       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:17:00.374839       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:17:00.388968       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0831 22:17:00.483635       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0831 22:17:10.093630       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0831 22:17:11.203873       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:17:15.423361       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:17:15.523250       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.56.97"}
	
	
	==> kube-controller-manager [01e29b6f418c] <==
	W0831 22:17:08.163358       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:08.163423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:09.407184       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:09.407217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:09.644324       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:09.644444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:09.691600       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:09.691661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0831 22:17:11.205178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:12.696989       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:12.697108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:14.728828       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:14.728949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:17.551210       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:17.551235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:17.861577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:17.861597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:19.425485       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:19.425513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:17:20.178382       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0831 22:17:20.217246       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:20.217352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:21.907247       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:21.907361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:17:22.367866       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="3.748µs"
	
	
	==> kube-proxy [b350457b47f5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:05:54.863632       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:05:54.868982       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0831 22:05:54.869215       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:05:54.888058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:05:54.888086       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:05:54.888100       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:05:54.889436       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:05:54.889548       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:05:54.889554       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:05:54.890541       1 config.go:197] "Starting service config controller"
	I0831 22:05:54.890549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:05:54.890558       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:05:54.890560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:05:54.890733       1 config.go:326] "Starting node config controller"
	I0831 22:05:54.890735       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:05:54.991230       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:05:54.991251       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:05:54.991267       1 shared_informer.go:320] Caches are synced for endpoint slice config
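	[Editorial sketch, not part of the captured log: the kube-proxy warning above suggests setting nodePortAddresses so NodePort connections are not accepted on all local IPs. The field below is from the external KubeProxyConfiguration type in the k8s.io/kube-proxy staging repo; the CIDR is a placeholder chosen to match this node's 192.168.105.x network, and the special `primary` value is only what the log itself suggests for the CLI flag.]

	package main

	import (
		"fmt"

		kubeproxyconfig "k8s.io/kube-proxy/config/v1alpha1"
	)

	func main() {
		// Restrict NodePort listeners to one node network instead of all
		// local IPs; the flag form in the warning is --nodeport-addresses.
		cfg := kubeproxyconfig.KubeProxyConfiguration{
			NodePortAddresses: []string{"192.168.105.0/24"}, // placeholder CIDR
		}
		fmt.Println(cfg.NodePortAddresses)
	}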
	
	
	==> kube-scheduler [e05eb9bbd5a0] <==
	W0831 22:05:46.488314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:05:46.488346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:46.488380       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:05:46.488388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:46.488402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:05:46.488444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:46.488465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:05:46.488499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:46.488527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:05:46.488536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:46.488604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:05:46.488613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:46.488900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:05:46.488923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:47.304137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:05:47.304242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:47.361836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 22:05:47.361888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:47.499699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:05:47.499746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:47.513070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:05:47.513103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:05:47.571807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:05:47.571880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:05:47.886859       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497615    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f30329-e856-405c-9bc1-d2170a5ff69a" containerName="volume-snapshot-controller"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497618    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc88a88-660a-472b-9f9a-cf2f55fdd5e6" containerName="node-driver-registrar"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497621    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc88a88-660a-472b-9f9a-cf2f55fdd5e6" containerName="csi-provisioner"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497623    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7e73c3-2273-4a99-8671-c4f995870bdf" containerName="gadget"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497626    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7e73c3-2273-4a99-8671-c4f995870bdf" containerName="gadget"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497629    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc88a88-660a-472b-9f9a-cf2f55fdd5e6" containerName="hostpath"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497631    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a1d6c-f1dd-4f94-9f43-74240d211026" containerName="volume-snapshot-controller"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497634    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc88a88-660a-472b-9f9a-cf2f55fdd5e6" containerName="csi-external-health-monitor-controller"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.497637    2041 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc88a88-660a-472b-9f9a-cf2f55fdd5e6" containerName="liveness-probe"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.668481    2041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krxdb\" (UniqueName: \"kubernetes.io/projected/e303b475-85d7-47f1-ba2e-8f3cd8610c93-kube-api-access-krxdb\") pod \"nginx\" (UID: \"e303b475-85d7-47f1-ba2e-8f3cd8610c93\") " pod="default/nginx"
	Aug 31 22:17:15 addons-492000 kubelet[2041]: I0831 22:17:15.668527    2041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e303b475-85d7-47f1-ba2e-8f3cd8610c93-gcp-creds\") pod \"nginx\" (UID: \"e303b475-85d7-47f1-ba2e-8f3cd8610c93\") " pod="default/nginx"
	Aug 31 22:17:17 addons-492000 kubelet[2041]: E0831 22:17:17.456387    2041 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="17d287e6-ce43-4610-aa99-6ee466611390"
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.169891    2041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=4.3371064839999995 podStartE2EDuration="7.169874914s" podCreationTimestamp="2024-08-31 22:17:15 +0000 UTC" firstStartedPulling="2024-08-31 22:17:15.90557952 +0000 UTC m=+687.495048095" lastFinishedPulling="2024-08-31 22:17:18.73834795 +0000 UTC m=+690.327816525" observedRunningTime="2024-08-31 22:17:19.356932331 +0000 UTC m=+690.946400905" watchObservedRunningTime="2024-08-31 22:17:22.169874914 +0000 UTC m=+693.759343488"
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.442637    2041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17d287e6-ce43-4610-aa99-6ee466611390-gcp-creds\") pod \"17d287e6-ce43-4610-aa99-6ee466611390\" (UID: \"17d287e6-ce43-4610-aa99-6ee466611390\") "
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.442663    2041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdhx\" (UniqueName: \"kubernetes.io/projected/17d287e6-ce43-4610-aa99-6ee466611390-kube-api-access-htdhx\") pod \"17d287e6-ce43-4610-aa99-6ee466611390\" (UID: \"17d287e6-ce43-4610-aa99-6ee466611390\") "
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.442684    2041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17d287e6-ce43-4610-aa99-6ee466611390-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "17d287e6-ce43-4610-aa99-6ee466611390" (UID: "17d287e6-ce43-4610-aa99-6ee466611390"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.446408    2041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d287e6-ce43-4610-aa99-6ee466611390-kube-api-access-htdhx" (OuterVolumeSpecName: "kube-api-access-htdhx") pod "17d287e6-ce43-4610-aa99-6ee466611390" (UID: "17d287e6-ce43-4610-aa99-6ee466611390"). InnerVolumeSpecName "kube-api-access-htdhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.542820    2041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tp7g\" (UniqueName: \"kubernetes.io/projected/f256bb8d-d37d-4371-a5a9-38e94d41cce2-kube-api-access-6tp7g\") pod \"f256bb8d-d37d-4371-a5a9-38e94d41cce2\" (UID: \"f256bb8d-d37d-4371-a5a9-38e94d41cce2\") "
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.542932    2041 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17d287e6-ce43-4610-aa99-6ee466611390-gcp-creds\") on node \"addons-492000\" DevicePath \"\""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.542953    2041 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-htdhx\" (UniqueName: \"kubernetes.io/projected/17d287e6-ce43-4610-aa99-6ee466611390-kube-api-access-htdhx\") on node \"addons-492000\" DevicePath \"\""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.543796    2041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f256bb8d-d37d-4371-a5a9-38e94d41cce2-kube-api-access-6tp7g" (OuterVolumeSpecName: "kube-api-access-6tp7g") pod "f256bb8d-d37d-4371-a5a9-38e94d41cce2" (UID: "f256bb8d-d37d-4371-a5a9-38e94d41cce2"). InnerVolumeSpecName "kube-api-access-6tp7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.643147    2041 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6tp7g\" (UniqueName: \"kubernetes.io/projected/f256bb8d-d37d-4371-a5a9-38e94d41cce2-kube-api-access-6tp7g\") on node \"addons-492000\" DevicePath \"\""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.744228    2041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkqt5\" (UniqueName: \"kubernetes.io/projected/bb84f056-88e3-4c3d-845b-eb2704863b95-kube-api-access-hkqt5\") pod \"bb84f056-88e3-4c3d-845b-eb2704863b95\" (UID: \"bb84f056-88e3-4c3d-845b-eb2704863b95\") "
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.744633    2041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb84f056-88e3-4c3d-845b-eb2704863b95-kube-api-access-hkqt5" (OuterVolumeSpecName: "kube-api-access-hkqt5") pod "bb84f056-88e3-4c3d-845b-eb2704863b95" (UID: "bb84f056-88e3-4c3d-845b-eb2704863b95"). InnerVolumeSpecName "kube-api-access-hkqt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:17:22 addons-492000 kubelet[2041]: I0831 22:17:22.844906    2041 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hkqt5\" (UniqueName: \"kubernetes.io/projected/bb84f056-88e3-4c3d-845b-eb2704863b95-kube-api-access-hkqt5\") on node \"addons-492000\" DevicePath \"\""
	
	
	==> storage-provisioner [bc93fdabde22] <==
	I0831 22:05:58.674428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:05:58.864039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:05:58.864064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:05:58.885251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:05:58.885623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-492000_ac436aff-ee8f-4e8f-b578-23552e0a25f1!
	I0831 22:05:58.888963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5250d07-200d-466f-a5f2-12db8be7b0d4", APIVersion:"v1", ResourceVersion:"771", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-492000_ac436aff-ee8f-4e8f-b578-23552e0a25f1 became leader
	I0831 22:05:59.039752       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-492000_ac436aff-ee8f-4e8f-b578-23552e0a25f1!
	

-- /stdout --
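Note: the storage-provisioner log above is healthy: it acquires the kube-system/k8s.io-minikube-hostpath lease and starts its provisioner controller, so it plays no part in the registry failure. If the leader election itself ever needs checking, the lease lives on an Endpoints object and can be dumped with (a sketch; the holder identity is recorded in the object's annotations):

	kubectl --context addons-492000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml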
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-492000 -n addons-492000
helpers_test.go:262: (dbg) Run:  kubectl --context addons-492000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox ingress-nginx-admission-create-969c8 ingress-nginx-admission-patch-vh6m6 registry-6fb4cdfc84-dbh5s registry-proxy-7mcwz
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-492000 describe pod busybox ingress-nginx-admission-create-969c8 ingress-nginx-admission-patch-vh6m6 registry-6fb4cdfc84-dbh5s registry-proxy-7mcwz
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context addons-492000 describe pod busybox ingress-nginx-admission-create-969c8 ingress-nginx-admission-patch-vh6m6 registry-6fb4cdfc84-dbh5s registry-proxy-7mcwz: exit status 1 (42.816542ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-492000/192.168.105.2
	Start Time:       Sat, 31 Aug 2024 15:08:11 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qh5gq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qh5gq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-492000
	  Normal   Pulling    7m41s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-969c8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vh6m6" not found
	Error from server (NotFound): pods "registry-6fb4cdfc84-dbh5s" not found
	Error from server (NotFound): pods "registry-proxy-7mcwz" not found

** /stderr **
helpers_test.go:280: kubectl --context addons-492000 describe pod busybox ingress-nginx-admission-create-969c8 ingress-nginx-admission-patch-vh6m6 registry-6fb4cdfc84-dbh5s registry-proxy-7mcwz: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.26s)
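Note: the registry failure reduces to the busybox pod stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unauthorized: authentication failed" (see the Events above), so the pod never leaves Pending. A minimal triage sketch, assuming docker and kubectl are available on the host alongside the addons-492000 profile (illustrative steps, not part of the test):

	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc                              # does the host itself get "unauthorized"?
	minikube -p addons-492000 image load gcr.io/k8s-minikube/busybox:1.28.4-glibc     # side-load the image so pods need no pull
	kubectl --context addons-492000 get events -n default --sort-by=.lastTimestamp    # confirm the pull errors stop

If the direct docker pull fails the same way, the fault is on the registry or proxy side rather than in the cluster.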

TestCertOptions (10.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-731000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-731000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.026200375s)

-- stdout --
	* [cert-options-731000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-731000" primary control-plane node in "cert-options-731000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-731000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-731000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-731000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-731000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-731000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.410458ms)

-- stdout --
	* The control-plane node cert-options-731000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-731000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-731000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-731000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-731000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-731000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.978417ms)

-- stdout --
	* The control-plane node cert-options-731000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-731000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-731000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-731000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-731000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-31 15:51:17.650036 -0700 PDT m=+2784.592483418
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-731000 -n cert-options-731000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-731000 -n cert-options-731000: exit status 7 (30.925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "cert-options-731000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "cert-options-731000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-731000
--- FAIL: TestCertOptions (10.29s)
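Note: nothing in this test ever reaches certificate handling; both VM creation attempts die on the host-side error Failed to connect to "/var/run/socket_vmnet": Connection refused, so the qemu2 driver has no network backend and start exits with status 80. A quick health check of the daemon on the Darwin host might look like this (a hedged sketch assuming a Homebrew-managed socket_vmnet service; socket path as in the log):

	ls -l /var/run/socket_vmnet                   # does the socket exist at the path minikube uses?
	sudo launchctl list | grep -i socket_vmnet    # is a launchd job for the daemon loaded at all?
	sudo brew services restart socket_vmnet       # restart it, if it was installed via Homebrew services

Until that socket accepts connections, every qemu2-driver test in this report fails identically before Kubernetes even starts, which matches the repeated exit status 80 results below.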

TestCertExpiration (195.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-889000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-889000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.128001125s)

-- stdout --
	* [cert-expiration-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-889000" primary control-plane node in "cert-expiration-889000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-889000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-889000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-889000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-889000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230961625s)

-- stdout --
	* [cert-expiration-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-889000" primary control-plane node in "cert-expiration-889000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-889000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-889000" primary control-plane node in "cert-expiration-889000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-31 15:54:17.541111 -0700 PDT m=+2964.505421960
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-889000 -n cert-expiration-889000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-889000 -n cert-expiration-889000: exit status 7 (58.496542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "cert-expiration-889000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "cert-expiration-889000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-889000
--- FAIL: TestCertExpiration (195.50s)
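Note: the second start (--cert-expiration=8760h against the existing profile) is the step that should detect and regenerate the 3m certificates, but it never boots the guest for the same socket_vmnet reason, so the "did not warn about expired certs" assertion at cert_options_test.go:136 fails vacuously. On a host where the VM does come up, the expiry this test manipulates can be read back directly (a sketch reusing the ssh/openssl pattern from TestCertOptions above):

	out/minikube-darwin-arm64 ssh -p cert-expiration-889000 -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"

With --cert-expiration=3m the reported notAfter date should sit roughly three minutes after provisioning, and the follow-up start is then expected to emit the expired-certificate warning.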

TestDockerFlags (10.54s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-502000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-502000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.313694083s)

-- stdout --
	* [docker-flags-502000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-502000" primary control-plane node in "docker-flags-502000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-502000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:50:56.952923    4221 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:50:56.953146    4221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:56.953149    4221 out.go:358] Setting ErrFile to fd 2...
	I0831 15:50:56.953152    4221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:56.953280    4221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:50:56.954322    4221 out.go:352] Setting JSON to false
	I0831 15:50:56.970271    4221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3020,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:50:56.970375    4221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:50:56.975543    4221 out.go:177] * [docker-flags-502000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:50:56.983569    4221 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:50:56.983645    4221 notify.go:220] Checking for updates...
	I0831 15:50:56.990441    4221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:50:56.993475    4221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:50:56.996575    4221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:50:56.999428    4221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:50:57.002456    4221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:50:57.005752    4221 config.go:182] Loaded profile config "force-systemd-flag-751000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:57.005822    4221 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:57.005867    4221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:50:57.009363    4221 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:50:57.016508    4221 start.go:297] selected driver: qemu2
	I0831 15:50:57.016515    4221 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:50:57.016522    4221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:50:57.018730    4221 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:50:57.022418    4221 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:50:57.025639    4221 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0831 15:50:57.025667    4221 cni.go:84] Creating CNI manager for ""
	I0831 15:50:57.025676    4221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:50:57.025680    4221 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:50:57.025715    4221 start.go:340] cluster config:
	{Name:docker-flags-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:50:57.029445    4221 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:57.038473    4221 out.go:177] * Starting "docker-flags-502000" primary control-plane node in "docker-flags-502000" cluster
	I0831 15:50:57.042526    4221 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:50:57.042541    4221 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:50:57.042555    4221 cache.go:56] Caching tarball of preloaded images
	I0831 15:50:57.042643    4221 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:50:57.042656    4221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:50:57.042721    4221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/docker-flags-502000/config.json ...
	I0831 15:50:57.042736    4221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/docker-flags-502000/config.json: {Name:mk561ee8a2dff2cfb471a0512fedef8267a723a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:50:57.042973    4221 start.go:360] acquireMachinesLock for docker-flags-502000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:57.043010    4221 start.go:364] duration metric: took 30.708µs to acquireMachinesLock for "docker-flags-502000"
	I0831 15:50:57.043022    4221 start.go:93] Provisioning new machine with config: &{Name:docker-flags-502000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:57.043080    4221 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:57.051495    4221 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:57.069802    4221 start.go:159] libmachine.API.Create for "docker-flags-502000" (driver="qemu2")
	I0831 15:50:57.069837    4221 client.go:168] LocalClient.Create starting
	I0831 15:50:57.069902    4221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:57.069931    4221 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:57.069943    4221 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:57.069995    4221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:57.070021    4221 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:57.070029    4221 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:57.070406    4221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:57.232272    4221 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:57.287303    4221 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:57.287308    4221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:57.287496    4221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2
	I0831 15:50:57.296641    4221 main.go:141] libmachine: STDOUT: 
	I0831 15:50:57.296662    4221 main.go:141] libmachine: STDERR: 
	I0831 15:50:57.296709    4221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2 +20000M
	I0831 15:50:57.304542    4221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:57.304557    4221 main.go:141] libmachine: STDERR: 
	I0831 15:50:57.304568    4221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2
	I0831 15:50:57.304573    4221 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:57.304588    4221 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:57.304617    4221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:f7:7e:51:80:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2
	I0831 15:50:57.306184    4221 main.go:141] libmachine: STDOUT: 
	I0831 15:50:57.306199    4221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:57.306226    4221 client.go:171] duration metric: took 236.383417ms to LocalClient.Create
	I0831 15:50:59.308365    4221 start.go:128] duration metric: took 2.265315208s to createHost
	I0831 15:50:59.308437    4221 start.go:83] releasing machines lock for "docker-flags-502000", held for 2.2654735s
	W0831 15:50:59.308483    4221 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:59.325863    4221 out.go:177] * Deleting "docker-flags-502000" in qemu2 ...
	W0831 15:50:59.355824    4221 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:59.355845    4221 start.go:729] Will try again in 5 seconds ...
	I0831 15:51:04.357925    4221 start.go:360] acquireMachinesLock for docker-flags-502000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:51:04.701852    4221 start.go:364] duration metric: took 343.798708ms to acquireMachinesLock for "docker-flags-502000"
	I0831 15:51:04.701977    4221 start.go:93] Provisioning new machine with config: &{Name:docker-flags-502000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:51:04.702311    4221 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:51:04.713000    4221 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:51:04.761759    4221 start.go:159] libmachine.API.Create for "docker-flags-502000" (driver="qemu2")
	I0831 15:51:04.761802    4221 client.go:168] LocalClient.Create starting
	I0831 15:51:04.761936    4221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:51:04.762007    4221 main.go:141] libmachine: Decoding PEM data...
	I0831 15:51:04.762023    4221 main.go:141] libmachine: Parsing certificate...
	I0831 15:51:04.762085    4221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:51:04.762129    4221 main.go:141] libmachine: Decoding PEM data...
	I0831 15:51:04.762141    4221 main.go:141] libmachine: Parsing certificate...
	I0831 15:51:04.762709    4221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:51:04.936742    4221 main.go:141] libmachine: Creating SSH key...
	I0831 15:51:05.159790    4221 main.go:141] libmachine: Creating Disk image...
	I0831 15:51:05.159802    4221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:51:05.160063    4221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2
	I0831 15:51:05.170041    4221 main.go:141] libmachine: STDOUT: 
	I0831 15:51:05.170060    4221 main.go:141] libmachine: STDERR: 
	I0831 15:51:05.170106    4221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2 +20000M
	I0831 15:51:05.177966    4221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:51:05.177981    4221 main.go:141] libmachine: STDERR: 
	I0831 15:51:05.177995    4221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2
	I0831 15:51:05.178000    4221 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:51:05.178015    4221 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:51:05.178047    4221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:9e:c1:03:b6:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/docker-flags-502000/disk.qcow2
	I0831 15:51:05.179636    4221 main.go:141] libmachine: STDOUT: 
	I0831 15:51:05.179654    4221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:51:05.179668    4221 client.go:171] duration metric: took 417.872084ms to LocalClient.Create
	I0831 15:51:07.181809    4221 start.go:128] duration metric: took 2.479530833s to createHost
	I0831 15:51:07.181888    4221 start.go:83] releasing machines lock for "docker-flags-502000", held for 2.480027666s
	W0831 15:51:07.182342    4221 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-502000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-502000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:51:07.205896    4221 out.go:201] 
	W0831 15:51:07.210993    4221 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:51:07.211020    4221 out.go:270] * 
	* 
	W0831 15:51:07.214052    4221 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:51:07.223874    4221 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-502000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-502000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-502000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.656666ms)

-- stdout --
	* The control-plane node docker-flags-502000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-502000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-502000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-502000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-502000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-502000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-502000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-502000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-502000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.710166ms)

-- stdout --
	* The control-plane node docker-flags-502000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-502000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-502000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-502000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-502000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-502000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-31 15:51:07.36376 -0700 PDT m=+2774.305950668
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-502000 -n docker-flags-502000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-502000 -n docker-flags-502000: exit status 7 (29.134583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "docker-flags-502000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "docker-flags-502000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-502000
--- FAIL: TestDockerFlags (10.54s)
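
The three failed assertions above (docker_test.go:63 and docker_test.go:73) are all substring checks against `systemctl show docker` output: Environment should carry FOO=BAR and BAZ=BAT, and ExecStart should carry --debug. A minimal sketch of that check pattern, assuming `minikube` is on PATH and using a hypothetical profile name ("docker-flags-demo" is not the CI profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerPropertyContains runs `minikube ssh` to dump one systemd property
// of the docker unit and checks it for an expected substring.
func dockerPropertyContains(profile, property, want string) error {
	cmd := exec.Command("minikube", "-p", profile, "ssh",
		fmt.Sprintf("sudo systemctl show docker --property=%s --no-pager", property))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl show failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), want) {
		return fmt.Errorf("expected %q in %s, got:\n%s", want, property, out)
	}
	return nil
}

func main() {
	// "docker-flags-demo" is a hypothetical profile name for illustration.
	for _, c := range []struct{ property, want string }{
		{"Environment", "FOO=BAR"},
		{"Environment", "BAZ=BAT"},
		{"ExecStart", "--debug"},
	} {
		if err := dockerPropertyContains("docker-flags-demo", c.property, c.want); err != nil {
			fmt.Println(err)
		}
	}
}

In this run the checks never reach the daemon: the host is Stopped, so each ssh invocation exits 83 before any property can be read.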

TestForceSystemdFlag (10.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-751000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-751000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.100037583s)

-- stdout --
	* [force-systemd-flag-751000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-751000" primary control-plane node in "force-systemd-flag-751000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-751000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:50:51.928701    4200 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:50:51.928833    4200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:51.928837    4200 out.go:358] Setting ErrFile to fd 2...
	I0831 15:50:51.928839    4200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:51.928971    4200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:50:51.930023    4200 out.go:352] Setting JSON to false
	I0831 15:50:51.946088    4200 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3015,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:50:51.946160    4200 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:50:51.952739    4200 out.go:177] * [force-systemd-flag-751000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:50:51.960690    4200 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:50:51.960774    4200 notify.go:220] Checking for updates...
	I0831 15:50:51.968636    4200 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:50:51.972677    4200 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:50:51.976670    4200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:50:51.979639    4200 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:50:51.982674    4200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:50:51.985994    4200 config.go:182] Loaded profile config "force-systemd-env-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:51.986071    4200 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:51.986117    4200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:50:51.990669    4200 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:50:51.997698    4200 start.go:297] selected driver: qemu2
	I0831 15:50:51.997703    4200 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:50:51.997709    4200 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:50:52.000168    4200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:50:52.003639    4200 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:50:52.006731    4200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 15:50:52.006745    4200 cni.go:84] Creating CNI manager for ""
	I0831 15:50:52.006752    4200 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:50:52.006759    4200 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:50:52.006782    4200 start.go:340] cluster config:
	{Name:force-systemd-flag-751000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-751000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:50:52.010472    4200 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:52.018618    4200 out.go:177] * Starting "force-systemd-flag-751000" primary control-plane node in "force-systemd-flag-751000" cluster
	I0831 15:50:52.022649    4200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:50:52.022662    4200 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:50:52.022695    4200 cache.go:56] Caching tarball of preloaded images
	I0831 15:50:52.022753    4200 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:50:52.022759    4200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:50:52.022819    4200 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/force-systemd-flag-751000/config.json ...
	I0831 15:50:52.022830    4200 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/force-systemd-flag-751000/config.json: {Name:mkc38b5753b1c33b704a4d9c3ebdfed9f9339d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:50:52.023058    4200 start.go:360] acquireMachinesLock for force-systemd-flag-751000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:52.023098    4200 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "force-systemd-flag-751000"
	I0831 15:50:52.023110    4200 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-751000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-751000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:52.023149    4200 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:52.031725    4200 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:52.049811    4200 start.go:159] libmachine.API.Create for "force-systemd-flag-751000" (driver="qemu2")
	I0831 15:50:52.049838    4200 client.go:168] LocalClient.Create starting
	I0831 15:50:52.049903    4200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:52.049938    4200 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:52.049947    4200 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:52.049985    4200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:52.050008    4200 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:52.050019    4200 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:52.050392    4200 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:52.213035    4200 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:52.333632    4200 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:52.333638    4200 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:52.333836    4200 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2
	I0831 15:50:52.343111    4200 main.go:141] libmachine: STDOUT: 
	I0831 15:50:52.343134    4200 main.go:141] libmachine: STDERR: 
	I0831 15:50:52.343188    4200 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2 +20000M
	I0831 15:50:52.351016    4200 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:52.351029    4200 main.go:141] libmachine: STDERR: 
	I0831 15:50:52.351044    4200 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2
	I0831 15:50:52.351048    4200 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:52.351059    4200 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:52.351087    4200 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c9:23:be:40:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2
	I0831 15:50:52.352659    4200 main.go:141] libmachine: STDOUT: 
	I0831 15:50:52.352672    4200 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:52.352691    4200 client.go:171] duration metric: took 302.855792ms to LocalClient.Create
	I0831 15:50:54.354816    4200 start.go:128] duration metric: took 2.331704042s to createHost
	I0831 15:50:54.354874    4200 start.go:83] releasing machines lock for "force-systemd-flag-751000", held for 2.331825042s
	W0831 15:50:54.354983    4200 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:54.384162    4200 out.go:177] * Deleting "force-systemd-flag-751000" in qemu2 ...
	W0831 15:50:54.409721    4200 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:54.409739    4200 start.go:729] Will try again in 5 seconds ...
	I0831 15:50:59.411820    4200 start.go:360] acquireMachinesLock for force-systemd-flag-751000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:59.412320    4200 start.go:364] duration metric: took 362.125µs to acquireMachinesLock for "force-systemd-flag-751000"
	I0831 15:50:59.412486    4200 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-751000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-751000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:59.412766    4200 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:59.420777    4200 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:59.472734    4200 start.go:159] libmachine.API.Create for "force-systemd-flag-751000" (driver="qemu2")
	I0831 15:50:59.472775    4200 client.go:168] LocalClient.Create starting
	I0831 15:50:59.472899    4200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:59.472976    4200 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:59.472994    4200 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:59.473047    4200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:59.473096    4200 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:59.473107    4200 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:59.473669    4200 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:59.658311    4200 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:59.929855    4200 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:59.929867    4200 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:59.930077    4200 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2
	I0831 15:50:59.939837    4200 main.go:141] libmachine: STDOUT: 
	I0831 15:50:59.939865    4200 main.go:141] libmachine: STDERR: 
	I0831 15:50:59.939919    4200 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2 +20000M
	I0831 15:50:59.947792    4200 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:59.947807    4200 main.go:141] libmachine: STDERR: 
	I0831 15:50:59.947819    4200 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2
	I0831 15:50:59.947824    4200 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:59.947836    4200 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:59.947870    4200 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:ee:63:81:16:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-flag-751000/disk.qcow2
	I0831 15:50:59.949488    4200 main.go:141] libmachine: STDOUT: 
	I0831 15:50:59.949503    4200 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:59.949517    4200 client.go:171] duration metric: took 476.74925ms to LocalClient.Create
	I0831 15:51:01.951686    4200 start.go:128] duration metric: took 2.538931458s to createHost
	I0831 15:51:01.951757    4200 start.go:83] releasing machines lock for "force-systemd-flag-751000", held for 2.539445625s
	W0831 15:51:01.951971    4200 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-751000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-751000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:51:01.965389    4200 out.go:201] 
	W0831 15:51:01.976457    4200 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:51:01.976497    4200 out.go:270] * 
	* 
	W0831 15:51:01.977784    4200 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:51:01.989309    4200 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-751000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-751000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-751000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.781208ms)

-- stdout --
	* The control-plane node force-systemd-flag-751000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-751000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-751000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-31 15:51:02.083135 -0700 PDT m=+2769.025193126
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-751000 -n force-systemd-flag-751000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-751000 -n force-systemd-flag-751000: exit status 7 (37.0835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "force-systemd-flag-751000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-751000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-751000
--- FAIL: TestForceSystemdFlag (10.30s)
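
Both VM creation attempts above die at the same spot: the socket_vmnet client cannot connect to /var/run/socket_vmnet, so QEMU never gets its network file descriptor. A quick pre-flight probe of that unix socket (a sketch, independent of the test suite) reproduces the failure mode without booting a VM:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath taken from the cluster config in the logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this host the dial fails the same way as the qemu2 start:
		// connect: connection refused (the daemon is not accepting connections).
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

When this probe is refused, every qemu2 start on the agent will fail the same way, which matches the cluster of socket_vmnet errors throughout this report.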

TestForceSystemdEnv (11.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-325000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-325000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.58806325s)

-- stdout --
	* [force-systemd-env-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-325000" primary control-plane node in "force-systemd-env-325000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-325000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:50:45.174051    4162 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:50:45.174187    4162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:45.174190    4162 out.go:358] Setting ErrFile to fd 2...
	I0831 15:50:45.174193    4162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:45.174309    4162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:50:45.175341    4162 out.go:352] Setting JSON to false
	I0831 15:50:45.191383    4162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3009,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:50:45.191450    4162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:50:45.198585    4162 out.go:177] * [force-systemd-env-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:50:45.207418    4162 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:50:45.207466    4162 notify.go:220] Checking for updates...
	I0831 15:50:45.214297    4162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:50:45.217370    4162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:50:45.220388    4162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:50:45.221824    4162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:50:45.225376    4162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0831 15:50:45.228793    4162 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:45.228846    4162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:50:45.233251    4162 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:50:45.240386    4162 start.go:297] selected driver: qemu2
	I0831 15:50:45.240394    4162 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:50:45.240401    4162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:50:45.242857    4162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:50:45.246409    4162 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:50:45.249486    4162 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 15:50:45.249501    4162 cni.go:84] Creating CNI manager for ""
	I0831 15:50:45.249508    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:50:45.249516    4162 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:50:45.249548    4162 start.go:340] cluster config:
	{Name:force-systemd-env-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:50:45.253326    4162 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:45.262321    4162 out.go:177] * Starting "force-systemd-env-325000" primary control-plane node in "force-systemd-env-325000" cluster
	I0831 15:50:45.266386    4162 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:50:45.266405    4162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:50:45.266423    4162 cache.go:56] Caching tarball of preloaded images
	I0831 15:50:45.266491    4162 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:50:45.266504    4162 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:50:45.266567    4162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/force-systemd-env-325000/config.json ...
	I0831 15:50:45.266579    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/force-systemd-env-325000/config.json: {Name:mk94ab1365598c18b46fde6898d650853f335a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:50:45.266818    4162 start.go:360] acquireMachinesLock for force-systemd-env-325000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:45.266857    4162 start.go:364] duration metric: took 29.542µs to acquireMachinesLock for "force-systemd-env-325000"
	I0831 15:50:45.266868    4162 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:45.266898    4162 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:45.274385    4162 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:45.292322    4162 start.go:159] libmachine.API.Create for "force-systemd-env-325000" (driver="qemu2")
	I0831 15:50:45.292350    4162 client.go:168] LocalClient.Create starting
	I0831 15:50:45.292410    4162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:45.292447    4162 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:45.292457    4162 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:45.292495    4162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:45.292518    4162 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:45.292528    4162 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:45.292883    4162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:45.456155    4162 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:45.532520    4162 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:45.532526    4162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:45.532725    4162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2
	I0831 15:50:45.541745    4162 main.go:141] libmachine: STDOUT: 
	I0831 15:50:45.541763    4162 main.go:141] libmachine: STDERR: 
	I0831 15:50:45.541807    4162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2 +20000M
	I0831 15:50:45.549700    4162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:45.549713    4162 main.go:141] libmachine: STDERR: 
	I0831 15:50:45.549733    4162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2
	I0831 15:50:45.549737    4162 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:45.549751    4162 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:45.549779    4162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:e9:22:5a:fc:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2
	I0831 15:50:45.551405    4162 main.go:141] libmachine: STDOUT: 
	I0831 15:50:45.551419    4162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:45.551437    4162 client.go:171] duration metric: took 259.089208ms to LocalClient.Create
	I0831 15:50:47.553453    4162 start.go:128] duration metric: took 2.286605875s to createHost
	I0831 15:50:47.553495    4162 start.go:83] releasing machines lock for "force-systemd-env-325000", held for 2.286689542s
	W0831 15:50:47.553516    4162 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:47.563067    4162 out.go:177] * Deleting "force-systemd-env-325000" in qemu2 ...
	W0831 15:50:47.573624    4162 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:47.573635    4162 start.go:729] Will try again in 5 seconds ...
	I0831 15:50:52.575731    4162 start.go:360] acquireMachinesLock for force-systemd-env-325000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:54.355053    4162 start.go:364] duration metric: took 1.779258792s to acquireMachinesLock for "force-systemd-env-325000"
	I0831 15:50:54.355179    4162 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:54.355437    4162 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:54.372097    4162 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0831 15:50:54.420183    4162 start.go:159] libmachine.API.Create for "force-systemd-env-325000" (driver="qemu2")
	I0831 15:50:54.420242    4162 client.go:168] LocalClient.Create starting
	I0831 15:50:54.420359    4162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:54.420417    4162 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:54.420434    4162 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:54.420502    4162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:54.420550    4162 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:54.420565    4162 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:54.421090    4162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:54.597753    4162 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:54.664369    4162 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:54.664378    4162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:54.664545    4162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2
	I0831 15:50:54.673968    4162 main.go:141] libmachine: STDOUT: 
	I0831 15:50:54.673987    4162 main.go:141] libmachine: STDERR: 
	I0831 15:50:54.674036    4162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2 +20000M
	I0831 15:50:54.681994    4162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:54.682009    4162 main.go:141] libmachine: STDERR: 
	I0831 15:50:54.682024    4162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2
	I0831 15:50:54.682028    4162 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:54.682040    4162 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:54.682064    4162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:b7:75:73:20:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/force-systemd-env-325000/disk.qcow2
	I0831 15:50:54.683691    4162 main.go:141] libmachine: STDOUT: 
	I0831 15:50:54.683706    4162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:54.683718    4162 client.go:171] duration metric: took 263.476042ms to LocalClient.Create
	I0831 15:50:56.685947    4162 start.go:128] duration metric: took 2.330513333s to createHost
	I0831 15:50:56.686020    4162 start.go:83] releasing machines lock for "force-systemd-env-325000", held for 2.330986083s
	W0831 15:50:56.686448    4162 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:56.699981    4162 out.go:201] 
	W0831 15:50:56.705162    4162 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:50:56.705214    4162 out.go:270] * 
	* 
	W0831 15:50:56.707494    4162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:50:56.717018    4162 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-325000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-325000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-325000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.094333ms)

-- stdout --
	* The control-plane node force-systemd-env-325000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-325000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-325000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-31 15:50:56.811602 -0700 PDT m=+2763.753528793
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-325000 -n force-systemd-env-325000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-325000 -n force-systemd-env-325000: exit status 7 (35.034917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "force-systemd-env-325000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-325000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-325000
--- FAIL: TestForceSystemdEnv (11.78s)
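
The stderr above also shows minikube's built-in recovery path: StartHost fails, the half-created profile is deleted, and host creation is retried once after a fixed 5-second wait ("Will try again in 5 seconds ...") before the run exits with GUEST_PROVISION. A stripped-down sketch of that retry shape, where createHost is a stand-in for the real libmachine call, not the actual minikube code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the provisioning call that failed in this run.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	var err error
	for attempt := 0; attempt < 2; attempt++ {
		if err = createHost(); err == nil {
			fmt.Println("host created")
			return
		}
		if attempt == 0 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
}

Because the retry hits the same refused socket, the second attempt is guaranteed to fail here; a fixed-delay retry only helps with transient races, not with a daemon that is down.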

TestFunctional/parallel/ServiceCmdConnect (29.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-221000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-221000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-65d86f57f4-v99wv" [7b164e4b-4dc7-4c3e-9604-070e8d2cf984] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-connect-65d86f57f4-v99wv" [7b164e4b-4dc7-4c3e-9604-070e8d2cf984] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010402083s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30828
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
E0831 15:22:32.462125    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:32.472096    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:32.485193    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:32.507824    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:32.551298    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:32.632748    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:32.796119    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:33.119516    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
E0831 15:22:33.761383    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:22:35.042989    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
E0831 15:22:42.729645    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30828: Get "http://192.168.105.4:30828": dial tcp 192.168.105.4:30828: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-221000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-v99wv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-221000/192.168.105.4
Start Time:       Sat, 31 Aug 2024 15:22:22 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://40ade4d986c9365751cfc873885d3efc044bfb1df0757ab5abd4bee0a531797e
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 31 Aug 2024 15:22:36 -0700
      Finished:     Sat, 31 Aug 2024 15:22:36 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nm72v (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-nm72v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-v99wv to functional-221000
  Normal   Pulled     15s (x3 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    15s (x3 over 28s)  kubelet            Created container echoserver-arm
  Normal   Started    15s (x3 over 28s)  kubelet            Started container echoserver-arm
  Warning  BackOff    3s (x3 over 26s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-v99wv_default(7b164e4b-4dc7-4c3e-9604-070e8d2cf984)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-221000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
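For context, that "exec format error" is the root cause of this failure: the nginx binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different CPU architecture than this arm64 node, so the container dies on every start and the pod loops in CrashLoopBackOff. A hedged way to confirm the mismatch with standard docker CLI commands (image and tag taken from this run):

	docker pull registry.k8s.io/echoserver-arm:1.8
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
	# an amd64 result on this arm64 host would match the exec format error above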
functional_test.go:1614: (dbg) Run:  kubectl --context functional-221000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.151.127
IPs:                      10.98.151.127
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30828/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
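Note the empty "Endpoints:" field above: the pod never becomes Ready, so the Service has no backends, which is exactly why every fetch of http://192.168.105.4:30828 earlier was refused. Assuming a working kubeconfig for this cluster, the chain can be inspected with standard kubectl commands:

	kubectl --context functional-221000 get endpoints hello-node-connect
	kubectl --context functional-221000 get pods -l app=hello-node-connect -o wide
	# no Ready pods -> empty Endpoints -> the NodePort has nothing to forward to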
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-221000 -n functional-221000
helpers_test.go:245: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 logs -n 25
helpers_test.go:253: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-221000 image ls                                                                                           | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	| image   | functional-221000 image save --daemon                                                                                | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | kicbase/echo-server:functional-221000                                                                                |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh echo                                                                                           | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | hello                                                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh cat                                                                                            | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | /etc/hostname                                                                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-221000 tunnel                                                                                             | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-221000 tunnel                                                                                             | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-221000 tunnel                                                                                             | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| service | functional-221000 service list                                                                                       | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	| service | functional-221000 service list                                                                                       | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-221000 service                                                                                            | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-221000                                                                                                    | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-221000 service                                                                                            | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| addons  | functional-221000 addons list                                                                                        | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	| addons  | functional-221000 addons list                                                                                        | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-221000 service                                                                                            | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| mount   | -p functional-221000                                                                                                 | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port687838321/001:/mount-9p       |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh findmnt                                                                                        | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh -- ls                                                                                          | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh cat                                                                                            | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | /mount-9p/test-1725142965018430000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh stat                                                                                           | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh stat                                                                                           | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh sudo                                                                                           | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh findmnt                                                                                        | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-221000                                                                                                 | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1321680966/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-221000 ssh findmnt                                                                                        | functional-221000 | jenkins | v1.33.1 | 31 Aug 24 15:22 PDT | 31 Aug 24 15:22 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 15:21:27
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 15:21:27.128046    2067 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:21:27.128182    2067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:21:27.128184    2067 out.go:358] Setting ErrFile to fd 2...
	I0831 15:21:27.128186    2067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:21:27.128321    2067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:21:27.129451    2067 out.go:352] Setting JSON to false
	I0831 15:21:27.145842    2067 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1251,"bootTime":1725141636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:21:27.145935    2067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:21:27.150981    2067 out.go:177] * [functional-221000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:21:27.160118    2067 notify.go:220] Checking for updates...
	I0831 15:21:27.163922    2067 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:21:27.167937    2067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:21:27.172063    2067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:21:27.174985    2067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:21:27.177948    2067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:21:27.181021    2067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:21:27.184234    2067 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:21:27.184281    2067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:21:27.188939    2067 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:21:27.195062    2067 start.go:297] selected driver: qemu2
	I0831 15:21:27.195067    2067 start.go:901] validating driver "qemu2" against &{Name:functional-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:21:27.195137    2067 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:21:27.197380    2067 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:21:27.197421    2067 cni.go:84] Creating CNI manager for ""
	I0831 15:21:27.197428    2067 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:21:27.197472    2067 start.go:340] cluster config:
	{Name:functional-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:21:27.200683    2067 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:21:27.208961    2067 out.go:177] * Starting "functional-221000" primary control-plane node in "functional-221000" cluster
	I0831 15:21:27.212945    2067 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:21:27.212957    2067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:21:27.212969    2067 cache.go:56] Caching tarball of preloaded images
	I0831 15:21:27.213025    2067 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:21:27.213029    2067 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:21:27.213091    2067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/config.json ...
	I0831 15:21:27.213585    2067 start.go:360] acquireMachinesLock for functional-221000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:21:27.213613    2067 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "functional-221000"
	I0831 15:21:27.213619    2067 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:21:27.213624    2067 fix.go:54] fixHost starting: 
	I0831 15:21:27.214176    2067 fix.go:112] recreateIfNeeded on functional-221000: state=Running err=<nil>
	W0831 15:21:27.214181    2067 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:21:27.218843    2067 out.go:177] * Updating the running qemu2 "functional-221000" VM ...
	I0831 15:21:27.226968    2067 machine.go:93] provisionDockerMachine start ...
	I0831 15:21:27.226998    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.227109    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.227111    2067 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 15:21:27.282840    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-221000
	
	I0831 15:21:27.282848    2067 buildroot.go:166] provisioning hostname "functional-221000"
	I0831 15:21:27.282880    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.282997    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.283001    2067 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-221000 && echo "functional-221000" | sudo tee /etc/hostname
	I0831 15:21:27.341878    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-221000
	
	I0831 15:21:27.341918    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.342024    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.342030    2067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-221000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-221000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-221000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 15:21:27.393306    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:21:27.393316    2067 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18943-972/.minikube CaCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18943-972/.minikube}
	I0831 15:21:27.393324    2067 buildroot.go:174] setting up certificates
	I0831 15:21:27.393327    2067 provision.go:84] configureAuth start
	I0831 15:21:27.393330    2067 provision.go:143] copyHostCerts
	I0831 15:21:27.393391    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem, removing ...
	I0831 15:21:27.393394    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem
	I0831 15:21:27.393524    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem (1082 bytes)
	I0831 15:21:27.393704    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem, removing ...
	I0831 15:21:27.393706    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem
	I0831 15:21:27.393903    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem (1123 bytes)
	I0831 15:21:27.394026    2067 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem, removing ...
	I0831 15:21:27.394028    2067 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem
	I0831 15:21:27.394088    2067 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem (1679 bytes)
	I0831 15:21:27.394180    2067 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem org=jenkins.functional-221000 san=[127.0.0.1 192.168.105.4 functional-221000 localhost minikube]
	I0831 15:21:27.556750    2067 provision.go:177] copyRemoteCerts
	I0831 15:21:27.556786    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 15:21:27.556793    2067 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
	I0831 15:21:27.586369    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 15:21:27.595348    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0831 15:21:27.603813    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 15:21:27.611647    2067 provision.go:87] duration metric: took 218.321166ms to configureAuth
	I0831 15:21:27.611653    2067 buildroot.go:189] setting minikube options for container-runtime
	I0831 15:21:27.611750    2067 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:21:27.611781    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.611870    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.611873    2067 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 15:21:27.667275    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0831 15:21:27.667281    2067 buildroot.go:70] root file system type: tmpfs
	I0831 15:21:27.667331    2067 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 15:21:27.667394    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.667523    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.667554    2067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 15:21:27.734146    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0831 15:21:27.734211    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.734335    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.734341    2067 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 15:21:27.789541    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:21:27.789548    2067 machine.go:96] duration metric: took 562.589958ms to provisionDockerMachine
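	# Aside: the empty "ExecStart=" line in the unit written above is the standard
	# systemd idiom for replacing, rather than appending to, an ExecStart inherited
	# from a base unit. A hedged way to verify the merged unit on the guest, using
	# standard systemctl commands that are not part of this run:
	#   sudo systemctl cat docker.service
	#   sudo systemctl show docker.service -p ExecStart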
	I0831 15:21:27.789552    2067 start.go:293] postStartSetup for "functional-221000" (driver="qemu2")
	I0831 15:21:27.789557    2067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 15:21:27.789605    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 15:21:27.789612    2067 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
	I0831 15:21:27.818795    2067 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 15:21:27.820230    2067 info.go:137] Remote host: Buildroot 2023.02.9
	I0831 15:21:27.820235    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/addons for local assets ...
	I0831 15:21:27.820318    2067 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/files for local assets ...
	I0831 15:21:27.820424    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem -> 14322.pem in /etc/ssl/certs
	I0831 15:21:27.820535    2067 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/test/nested/copy/1432/hosts -> hosts in /etc/test/nested/copy/1432
	I0831 15:21:27.820565    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1432
	I0831 15:21:27.823750    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:21:27.831797    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/test/nested/copy/1432/hosts --> /etc/test/nested/copy/1432/hosts (40 bytes)
	I0831 15:21:27.840057    2067 start.go:296] duration metric: took 50.501833ms for postStartSetup
	I0831 15:21:27.840069    2067 fix.go:56] duration metric: took 626.461416ms for fixHost
	I0831 15:21:27.840104    2067 main.go:141] libmachine: Using SSH client type: native
	I0831 15:21:27.840211    2067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f545a0] 0x102f56e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0831 15:21:27.840214    2067 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 15:21:27.892918    2067 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725142887.893551669
	
	I0831 15:21:27.892923    2067 fix.go:216] guest clock: 1725142887.893551669
	I0831 15:21:27.892926    2067 fix.go:229] Guest: 2024-08-31 15:21:27.893551669 -0700 PDT Remote: 2024-08-31 15:21:27.84007 -0700 PDT m=+0.731142001 (delta=53.481669ms)
	I0831 15:21:27.892936    2067 fix.go:200] guest clock delta is within tolerance: 53.481669ms
	I0831 15:21:27.892938    2067 start.go:83] releasing machines lock for "functional-221000", held for 679.338041ms
	I0831 15:21:27.893234    2067 ssh_runner.go:195] Run: cat /version.json
	I0831 15:21:27.893234    2067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 15:21:27.893240    2067 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
	I0831 15:21:27.893251    2067 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
	I0831 15:21:27.966020    2067 ssh_runner.go:195] Run: systemctl --version
	I0831 15:21:27.968216    2067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 15:21:27.970089    2067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 15:21:27.970112    2067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 15:21:27.973883    2067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 15:21:27.973888    2067 start.go:495] detecting cgroup driver to use...
	I0831 15:21:27.973951    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:21:27.980201    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 15:21:27.983795    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 15:21:27.987434    2067 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 15:21:27.987456    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 15:21:27.991264    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:21:27.995217    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 15:21:27.999333    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:21:28.003402    2067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 15:21:28.007293    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 15:21:28.011060    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 15:21:28.015071    2067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 15:21:28.018949    2067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 15:21:28.022822    2067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 15:21:28.026148    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:21:28.130009    2067 ssh_runner.go:195] Run: sudo systemctl restart containerd
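	# Aside: the sed edits above pin containerd to the "cgroupfs" driver by forcing
	# SystemdCgroup = false in /etc/containerd/config.toml. A hedged one-liner to
	# confirm the result on the guest (plain grep; not a command from this run):
	#   out/minikube-darwin-arm64 -p functional-221000 ssh "grep SystemdCgroup /etc/containerd/config.toml"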
	I0831 15:21:28.142450    2067 start.go:495] detecting cgroup driver to use...
	I0831 15:21:28.142495    2067 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 15:21:28.149602    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:21:28.161340    2067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 15:21:28.186775    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:21:28.193453    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:21:28.199546    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:21:28.206232    2067 ssh_runner.go:195] Run: which cri-dockerd
	I0831 15:21:28.207637    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 15:21:28.211035    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0831 15:21:28.217000    2067 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 15:21:28.326905    2067 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 15:21:28.444294    2067 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 15:21:28.444348    2067 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 15:21:28.450592    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:21:28.590280    2067 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:21:40.897662    2067 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.307659458s)
	I0831 15:21:40.897725    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 15:21:40.905034    2067 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0831 15:21:40.913518    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:21:40.920982    2067 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 15:21:41.015062    2067 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 15:21:41.110019    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:21:41.192765    2067 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 15:21:41.200393    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:21:41.206665    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:21:41.276372    2067 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 15:21:41.304512    2067 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 15:21:41.304579    2067 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 15:21:41.307070    2067 start.go:563] Will wait 60s for crictl version
	I0831 15:21:41.307106    2067 ssh_runner.go:195] Run: which crictl
	I0831 15:21:41.308609    2067 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 15:21:41.320765    2067 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0831 15:21:41.320839    2067 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:21:41.334776    2067 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:21:41.345104    2067 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0831 15:21:41.345249    2067 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0831 15:21:41.353882    2067 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0831 15:21:41.356944    2067 kubeadm.go:883] updating cluster {Name:functional-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 15:21:41.357022    2067 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:21:41.357083    2067 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:21:41.362733    2067 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-221000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0831 15:21:41.362737    2067 docker.go:615] Images already preloaded, skipping extraction
	I0831 15:21:41.362782    2067 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:21:41.368044    2067 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-221000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0831 15:21:41.368057    2067 cache_images.go:84] Images are preloaded, skipping loading
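Editor's note: the preload check above is just a comparison of the daemon's `repo:tag` list against the expected image set for this Kubernetes version. A minimal sketch of that comparison follows; the `want` list is an illustrative subset, not minikube's actual preload manifest.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the log shows minikube running inside the guest.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Illustrative subset of the v1.31.0 preload; the real list is larger.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
```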
	I0831 15:21:41.368061    2067 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.0 docker true true} ...
	I0831 15:21:41.368132    2067 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-221000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 15:21:41.368176    2067 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 15:21:41.384000    2067 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0831 15:21:41.384073    2067 cni.go:84] Creating CNI manager for ""
	I0831 15:21:41.384079    2067 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:21:41.384083    2067 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 15:21:41.384092    2067 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-221000 NodeName:functional-221000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 15:21:41.384175    2067 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-221000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
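Editor's note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the options struct logged at kubeadm.go:181. A much-reduced sketch of that render step using text/template; the struct and template here are illustrative, not minikube's actual types.

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the options that feed the ClusterConfiguration.
type clusterOpts struct {
	KubernetesVersion string
	ControlPlane      string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	// Render the document with the values seen in this run's log.
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	_ = t.Execute(os.Stdout, clusterOpts{
		KubernetesVersion: "v1.31.0",
		ControlPlane:      "control-plane.minikube.internal:8441",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
}
```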
	I0831 15:21:41.384235    2067 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 15:21:41.387738    2067 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 15:21:41.387771    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 15:21:41.391149    2067 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0831 15:21:41.397315    2067 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 15:21:41.403195    2067 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0831 15:21:41.409479    2067 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0831 15:21:41.410985    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:21:41.483946    2067 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:21:41.490477    2067 certs.go:68] Setting up /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000 for IP: 192.168.105.4
	I0831 15:21:41.490479    2067 certs.go:194] generating shared ca certs ...
	I0831 15:21:41.490486    2067 certs.go:226] acquiring lock for ca certs: {Name:mk9afb7f1a72cfb2bc15164a899be0d76a69591d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:21:41.490625    2067 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key
	I0831 15:21:41.490676    2067 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key
	I0831 15:21:41.490681    2067 certs.go:256] generating profile certs ...
	I0831 15:21:41.490734    2067 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.key
	I0831 15:21:41.490788    2067 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/apiserver.key.831d7816
	I0831 15:21:41.490830    2067 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/proxy-client.key
	I0831 15:21:41.490994    2067 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem (1338 bytes)
	W0831 15:21:41.491023    2067 certs.go:480] ignoring /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432_empty.pem, impossibly tiny 0 bytes
	I0831 15:21:41.491027    2067 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 15:21:41.491048    2067 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem (1082 bytes)
	I0831 15:21:41.491066    2067 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem (1123 bytes)
	I0831 15:21:41.491090    2067 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem (1679 bytes)
	I0831 15:21:41.491125    2067 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:21:41.491466    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 15:21:41.500165    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 15:21:41.508315    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 15:21:41.516463    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0831 15:21:41.524940    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0831 15:21:41.533499    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 15:21:41.541873    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 15:21:41.550217    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 15:21:41.558803    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /usr/share/ca-certificates/14322.pem (1708 bytes)
	I0831 15:21:41.567074    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 15:21:41.575082    2067 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem --> /usr/share/ca-certificates/1432.pem (1338 bytes)
	I0831 15:21:41.583045    2067 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 15:21:41.589224    2067 ssh_runner.go:195] Run: openssl version
	I0831 15:21:41.591191    2067 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14322.pem && ln -fs /usr/share/ca-certificates/14322.pem /etc/ssl/certs/14322.pem"
	I0831 15:21:41.595220    2067 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14322.pem
	I0831 15:21:41.596851    2067 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:20 /usr/share/ca-certificates/14322.pem
	I0831 15:21:41.596876    2067 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14322.pem
	I0831 15:21:41.598854    2067 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14322.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 15:21:41.602450    2067 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 15:21:41.606226    2067 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:21:41.607742    2067 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:05 /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:21:41.607758    2067 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:21:41.609771    2067 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 15:21:41.613383    2067 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1432.pem && ln -fs /usr/share/ca-certificates/1432.pem /etc/ssl/certs/1432.pem"
	I0831 15:21:41.617967    2067 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1432.pem
	I0831 15:21:41.619674    2067 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:20 /usr/share/ca-certificates/1432.pem
	I0831 15:21:41.619694    2067 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1432.pem
	I0831 15:21:41.621696    2067 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1432.pem /etc/ssl/certs/51391683.0"
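Editor's note: each cert installed under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash (`<hash>.0`), the convention OpenSSL's hashed lookup directory uses to find trust anchors. A sketch of that step, shelling out to the same `openssl x509 -hash -noout` invocation seen above; error handling is minimal.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert mirrors the log's "openssl x509 -hash" + "ln -fs" pair:
// compute the subject hash, then symlink <certsDir>/<hash>.0 at the cert.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f behaviour: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```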
	I0831 15:21:41.625462    2067 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 15:21:41.627089    2067 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 15:21:41.629188    2067 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 15:21:41.631166    2067 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 15:21:41.633469    2067 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 15:21:41.635645    2067 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 15:21:41.637712    2067 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
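Editor's note: `openssl x509 -checkend 86400` asks whether a cert expires within the next 24 hours; a failing check is what would trigger regeneration. The same test expressed with Go's crypto/x509, as a self-contained sketch (the path is one of those checked above, so this would run inside the guest):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```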
	I0831 15:21:41.639736    2067 kubeadm.go:392] StartCluster: {Name:functional-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:21:41.639809    2067 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:21:41.648633    2067 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 15:21:41.652071    2067 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 15:21:41.652074    2067 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 15:21:41.652098    2067 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 15:21:41.655412    2067 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:21:41.655701    2067 kubeconfig.go:125] found "functional-221000" server: "https://192.168.105.4:8441"
	I0831 15:21:41.656320    2067 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 15:21:41.659567    2067 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
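Editor's note: config drift is detected purely through `diff -u`'s exit status: 0 means the deployed kubeadm.yaml matches the new render, 1 means the files differ and the cluster is reconfigured. A sketch of that decision; `diff` exit status 2 (missing file, etc.) is treated as a real error.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps exit status 1 to "drifted".
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Print("will reconfigure cluster:\n", diff)
	}
}
```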
	I0831 15:21:41.659572    2067 kubeadm.go:1160] stopping kube-system containers ...
	I0831 15:21:41.659618    2067 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:21:41.666622    2067 docker.go:483] Stopping containers: [45948d30c347 4ba37f35ef49 b8822ec23867 79c805b1288f 1fbcb2c82bbe 1e03326bc823 f5619c13909c 04b7ad0da85f 06cb2ffdfa93 89ce027c7209 7ad3fb94cbaf 2b5055df6692 c966bd867d83 b6f718680b56 200d51e7ed34 71688eada235 94e10bfb997f 08279b425d5a 252096e62304 39e861ed52db 748e45e95cce 358fcdc61161 756f3b496599 cedb2c6d48da 4e4848f5c441 58fbee1875ec 3c4ceeac907c 39105b5cb903 375dd89ed883]
	I0831 15:21:41.666680    2067 ssh_runner.go:195] Run: docker stop 45948d30c347 4ba37f35ef49 b8822ec23867 79c805b1288f 1fbcb2c82bbe 1e03326bc823 f5619c13909c 04b7ad0da85f 06cb2ffdfa93 89ce027c7209 7ad3fb94cbaf 2b5055df6692 c966bd867d83 b6f718680b56 200d51e7ed34 71688eada235 94e10bfb997f 08279b425d5a 252096e62304 39e861ed52db 748e45e95cce 358fcdc61161 756f3b496599 cedb2c6d48da 4e4848f5c441 58fbee1875ec 3c4ceeac907c 39105b5cb903 375dd89ed883
	I0831 15:21:41.673510    2067 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0831 15:21:41.786126    2067 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:21:41.792394    2067 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Aug 31 22:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug 31 22:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 31 22:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 31 22:20 /etc/kubernetes/scheduler.conf
	
	I0831 15:21:41.792434    2067 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0831 15:21:41.797406    2067 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0831 15:21:41.801969    2067 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0831 15:21:41.806495    2067 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:21:41.806523    2067 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:21:41.810767    2067 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0831 15:21:41.814795    2067 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:21:41.814815    2067 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 15:21:41.818843    2067 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:21:41.822784    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:21:41.841626    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:21:42.302491    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:21:42.430039    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:21:42.452708    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
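Editor's note: because existing configuration files were found, the restart path re-runs only the five `kubeadm init phase` subcommands above instead of a full `kubeadm init`. A sketch of that loop, reusing the binary path, config path, and flags shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.31.0"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, ph := range phases {
		// Mirrors: sudo env PATH="<binDir>:$PATH" kubeadm init phase <ph> --config <cfg>
		cmd := fmt.Sprintf("sudo env PATH=%q:$PATH kubeadm init phase %s --config %s", binDir, ph, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", ph, err, out)
			return
		}
	}
}
```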
	I0831 15:21:42.471308    2067 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:21:42.471380    2067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:21:42.973453    2067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:21:43.473456    2067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:21:43.479101    2067 api_server.go:72] duration metric: took 1.007816583s to wait for apiserver process to appear ...
	I0831 15:21:43.479107    2067 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:21:43.479116    2067 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0831 15:21:45.495406    2067 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0831 15:21:45.495421    2067 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0831 15:21:45.495426    2067 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0831 15:21:45.534995    2067 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 15:21:45.535007    2067 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 15:21:45.979673    2067 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0831 15:21:45.992164    2067 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 15:21:45.992190    2067 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 15:21:46.479617    2067 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0831 15:21:46.493474    2067 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 15:21:46.493503    2067 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 15:21:46.979943    2067 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0831 15:21:46.982774    2067 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0831 15:21:46.986845    2067 api_server.go:141] control plane version: v1.31.0
	I0831 15:21:46.986854    2067 api_server.go:131] duration metric: took 3.507826334s to wait for apiserver health ...
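Editor's note on the wait loop above: the poller tolerates the initial 403 (anonymous requests to /healthz are denied until the rbac/bootstrap-roles poststarthook finishes) and the subsequent 500s (remaining poststarthooks still pending) and simply re-polls until it gets a 200. A self-contained poller in the same spirit; the endpoint is the one from the log, and `InsecureSkipVerify` stands in for minikube's real CA verification.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: minikube verifies against its own cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.105.4:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // "ok"
				return
			}
			// 403 (RBAC not bootstrapped) and 500 (hooks pending) both mean "retry".
			fmt.Println("not ready, status", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
```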
	I0831 15:21:46.986858    2067 cni.go:84] Creating CNI manager for ""
	I0831 15:21:46.986864    2067 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:21:47.061891    2067 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 15:21:47.066064    2067 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 15:21:47.070787    2067 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0831 15:21:47.083689    2067 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 15:21:47.090165    2067 system_pods.go:59] 7 kube-system pods found
	I0831 15:21:47.090179    2067 system_pods.go:61] "coredns-6f6b679f8f-7rkzs" [f60a3743-2587-4cd1-9661-eca50c958c35] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 15:21:47.090182    2067 system_pods.go:61] "etcd-functional-221000" [064f89a2-48f4-4169-96ec-3309e746e984] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0831 15:21:47.090186    2067 system_pods.go:61] "kube-apiserver-functional-221000" [624f046e-4453-415b-837d-03ab8b4c39ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0831 15:21:47.090188    2067 system_pods.go:61] "kube-controller-manager-functional-221000" [522cc570-9ca1-4c50-9da4-729b475b46b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0831 15:21:47.090190    2067 system_pods.go:61] "kube-proxy-24lmn" [26fab0f0-0f6e-4c7c-87e8-4de2c0d94d5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0831 15:21:47.090192    2067 system_pods.go:61] "kube-scheduler-functional-221000" [340bb34e-321b-44be-a075-540b8cde5827] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0831 15:21:47.090193    2067 system_pods.go:61] "storage-provisioner" [ae0ec11c-af2f-4c29-8beb-e7d90e24ed30] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0831 15:21:47.090199    2067 system_pods.go:74] duration metric: took 6.500541ms to wait for pod list to return data ...
	I0831 15:21:47.090202    2067 node_conditions.go:102] verifying NodePressure condition ...
	I0831 15:21:47.094435    2067 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 15:21:47.094447    2067 node_conditions.go:123] node cpu capacity is 2
	I0831 15:21:47.094453    2067 node_conditions.go:105] duration metric: took 4.249209ms to run NodePressure ...
	I0831 15:21:47.094465    2067 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:21:47.316914    2067 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0831 15:21:47.319600    2067 kubeadm.go:739] kubelet initialised
	I0831 15:21:47.319605    2067 kubeadm.go:740] duration metric: took 2.681875ms waiting for restarted kubelet to initialise ...
	I0831 15:21:47.319609    2067 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 15:21:47.322746    2067 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:49.328518    2067 pod_ready.go:103] pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace has status "Ready":"False"
	I0831 15:21:49.836590    2067 pod_ready.go:93] pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:49.836607    2067 pod_ready.go:82] duration metric: took 2.513912208s for pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:49.836618    2067 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:51.850211    2067 pod_ready.go:103] pod "etcd-functional-221000" in "kube-system" namespace has status "Ready":"False"
	I0831 15:21:53.852322    2067 pod_ready.go:103] pod "etcd-functional-221000" in "kube-system" namespace has status "Ready":"False"
	I0831 15:21:56.346717    2067 pod_ready.go:93] pod "etcd-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:56.346732    2067 pod_ready.go:82] duration metric: took 6.510259375s for pod "etcd-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.346743    2067 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.351927    2067 pod_ready.go:93] pod "kube-apiserver-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:56.351933    2067 pod_ready.go:82] duration metric: took 5.183292ms for pod "kube-apiserver-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.351944    2067 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.356942    2067 pod_ready.go:93] pod "kube-controller-manager-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:56.356947    2067 pod_ready.go:82] duration metric: took 4.998333ms for pod "kube-controller-manager-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.356953    2067 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-24lmn" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.361036    2067 pod_ready.go:93] pod "kube-proxy-24lmn" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:56.361041    2067 pod_ready.go:82] duration metric: took 4.083875ms for pod "kube-proxy-24lmn" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.361047    2067 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.365132    2067 pod_ready.go:93] pod "kube-scheduler-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:56.365135    2067 pod_ready.go:82] duration metric: took 4.08525ms for pod "kube-scheduler-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.365141    2067 pod_ready.go:39] duration metric: took 9.045740333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
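Editor's note: each `pod_ready` wait above polls a system pod's Ready condition. The equivalent check written against client-go, as a sketch; the kubeconfig path and pod name are taken from this log, and a recent client-go (with `wait.PollUntilContextTimeout`) is assumed.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/18943-972/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll until the pod reports Ready, within the same 4m budget as the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-functional-221000", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}
```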
	I0831 15:21:56.365155    2067 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 15:21:56.373216    2067 ops.go:34] apiserver oom_adj: -16
	I0831 15:21:56.373223    2067 kubeadm.go:597] duration metric: took 14.721491083s to restartPrimaryControlPlane
	I0831 15:21:56.373228    2067 kubeadm.go:394] duration metric: took 14.733847292s to StartCluster
	I0831 15:21:56.373244    2067 settings.go:142] acquiring lock: {Name:mk1f68c46dbdc908eb5f421f0b3b97ef9fe18a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:21:56.373427    2067 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:21:56.374035    2067 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:21:56.374463    2067 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:21:56.374488    2067 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 15:21:56.374559    2067 addons.go:69] Setting storage-provisioner=true in profile "functional-221000"
	I0831 15:21:56.374579    2067 addons.go:234] Setting addon storage-provisioner=true in "functional-221000"
	W0831 15:21:56.374584    2067 addons.go:243] addon storage-provisioner should already be in state true
	I0831 15:21:56.374597    2067 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:21:56.374607    2067 host.go:66] Checking if "functional-221000" exists ...
	I0831 15:21:56.374608    2067 addons.go:69] Setting default-storageclass=true in profile "functional-221000"
	I0831 15:21:56.374630    2067 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-221000"
	I0831 15:21:56.376079    2067 addons.go:234] Setting addon default-storageclass=true in "functional-221000"
	W0831 15:21:56.376084    2067 addons.go:243] addon default-storageclass should already be in state true
	I0831 15:21:56.376097    2067 host.go:66] Checking if "functional-221000" exists ...
	I0831 15:21:56.379061    2067 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 15:21:56.379066    2067 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 15:21:56.379076    2067 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
	I0831 15:21:56.382394    2067 out.go:177] * Verifying Kubernetes components...
	I0831 15:21:56.386477    2067 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:21:56.390516    2067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:21:56.393500    2067 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:21:56.393504    2067 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 15:21:56.393512    2067 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
	I0831 15:21:56.496790    2067 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:21:56.508207    2067 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 15:21:56.512212    2067 node_ready.go:35] waiting up to 6m0s for node "functional-221000" to be "Ready" ...
	I0831 15:21:56.541064    2067 node_ready.go:49] node "functional-221000" has status "Ready":"True"
	I0831 15:21:56.541069    2067 node_ready.go:38] duration metric: took 28.848041ms for node "functional-221000" to be "Ready" ...
	I0831 15:21:56.541073    2067 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 15:21:56.571339    2067 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:21:56.742642    2067 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:56.848826    2067 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0831 15:21:56.852754    2067 addons.go:510] duration metric: took 478.290791ms for enable addons: enabled=[default-storageclass storage-provisioner]
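Editor's note: addon enablement, as the two `kubectl apply` Run lines above show, is just copying the manifest into /etc/kubernetes/addons and applying it with the guest's pinned kubectl. A sketch of one such apply, using the exact command shape from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <manifest>
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	fmt.Println("addon applied")
}
```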
	I0831 15:21:57.142190    2067 pod_ready.go:93] pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:57.142200    2067 pod_ready.go:82] duration metric: took 399.558ms for pod "coredns-6f6b679f8f-7rkzs" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:57.142208    2067 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:57.541092    2067 pod_ready.go:93] pod "etcd-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:57.541099    2067 pod_ready.go:82] duration metric: took 398.895792ms for pod "etcd-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:57.541109    2067 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:57.946007    2067 pod_ready.go:93] pod "kube-apiserver-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:57.946045    2067 pod_ready.go:82] duration metric: took 404.934459ms for pod "kube-apiserver-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:57.946071    2067 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:58.346533    2067 pod_ready.go:93] pod "kube-controller-manager-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:58.346566    2067 pod_ready.go:82] duration metric: took 400.488584ms for pod "kube-controller-manager-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:58.346589    2067 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-24lmn" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:58.746824    2067 pod_ready.go:93] pod "kube-proxy-24lmn" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:58.746861    2067 pod_ready.go:82] duration metric: took 400.26525ms for pod "kube-proxy-24lmn" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:58.746887    2067 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:59.143350    2067 pod_ready.go:93] pod "kube-scheduler-functional-221000" in "kube-system" namespace has status "Ready":"True"
	I0831 15:21:59.143366    2067 pod_ready.go:82] duration metric: took 396.475584ms for pod "kube-scheduler-functional-221000" in "kube-system" namespace to be "Ready" ...
	I0831 15:21:59.143379    2067 pod_ready.go:39] duration metric: took 2.602360584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 15:21:59.143406    2067 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:21:59.143582    2067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:21:59.157745    2067 api_server.go:72] duration metric: took 2.783330042s to wait for apiserver process to appear ...
	I0831 15:21:59.157755    2067 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:21:59.157769    2067 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0831 15:21:59.162885    2067 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0831 15:21:59.163730    2067 api_server.go:141] control plane version: v1.31.0
	I0831 15:21:59.163738    2067 api_server.go:131] duration metric: took 5.979666ms to wait for apiserver health ...
	I0831 15:21:59.163750    2067 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 15:21:59.352833    2067 system_pods.go:59] 7 kube-system pods found
	I0831 15:21:59.352865    2067 system_pods.go:61] "coredns-6f6b679f8f-7rkzs" [f60a3743-2587-4cd1-9661-eca50c958c35] Running
	I0831 15:21:59.352872    2067 system_pods.go:61] "etcd-functional-221000" [064f89a2-48f4-4169-96ec-3309e746e984] Running
	I0831 15:21:59.352878    2067 system_pods.go:61] "kube-apiserver-functional-221000" [624f046e-4453-415b-837d-03ab8b4c39ba] Running
	I0831 15:21:59.352888    2067 system_pods.go:61] "kube-controller-manager-functional-221000" [522cc570-9ca1-4c50-9da4-729b475b46b6] Running
	I0831 15:21:59.352893    2067 system_pods.go:61] "kube-proxy-24lmn" [26fab0f0-0f6e-4c7c-87e8-4de2c0d94d5b] Running
	I0831 15:21:59.352901    2067 system_pods.go:61] "kube-scheduler-functional-221000" [340bb34e-321b-44be-a075-540b8cde5827] Running
	I0831 15:21:59.352911    2067 system_pods.go:61] "storage-provisioner" [ae0ec11c-af2f-4c29-8beb-e7d90e24ed30] Running
	I0831 15:21:59.352920    2067 system_pods.go:74] duration metric: took 189.168083ms to wait for pod list to return data ...
	I0831 15:21:59.352931    2067 default_sa.go:34] waiting for default service account to be created ...
	I0831 15:21:59.544734    2067 default_sa.go:45] found service account: "default"
	I0831 15:21:59.544749    2067 default_sa.go:55] duration metric: took 191.817667ms for default service account to be created ...
	I0831 15:21:59.544762    2067 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 15:21:59.750504    2067 system_pods.go:86] 7 kube-system pods found
	I0831 15:21:59.750534    2067 system_pods.go:89] "coredns-6f6b679f8f-7rkzs" [f60a3743-2587-4cd1-9661-eca50c958c35] Running
	I0831 15:21:59.750541    2067 system_pods.go:89] "etcd-functional-221000" [064f89a2-48f4-4169-96ec-3309e746e984] Running
	I0831 15:21:59.750546    2067 system_pods.go:89] "kube-apiserver-functional-221000" [624f046e-4453-415b-837d-03ab8b4c39ba] Running
	I0831 15:21:59.750551    2067 system_pods.go:89] "kube-controller-manager-functional-221000" [522cc570-9ca1-4c50-9da4-729b475b46b6] Running
	I0831 15:21:59.750555    2067 system_pods.go:89] "kube-proxy-24lmn" [26fab0f0-0f6e-4c7c-87e8-4de2c0d94d5b] Running
	I0831 15:21:59.750558    2067 system_pods.go:89] "kube-scheduler-functional-221000" [340bb34e-321b-44be-a075-540b8cde5827] Running
	I0831 15:21:59.750562    2067 system_pods.go:89] "storage-provisioner" [ae0ec11c-af2f-4c29-8beb-e7d90e24ed30] Running
	I0831 15:21:59.750575    2067 system_pods.go:126] duration metric: took 205.811541ms to wait for k8s-apps to be running ...
	I0831 15:21:59.750584    2067 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 15:21:59.750758    2067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 15:21:59.771853    2067 system_svc.go:56] duration metric: took 21.26825ms WaitForService to wait for kubelet
	I0831 15:21:59.771866    2067 kubeadm.go:582] duration metric: took 3.397465333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:21:59.771884    2067 node_conditions.go:102] verifying NodePressure condition ...
	I0831 15:21:59.948077    2067 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0831 15:21:59.948103    2067 node_conditions.go:123] node cpu capacity is 2
	I0831 15:21:59.948125    2067 node_conditions.go:105] duration metric: took 176.239666ms to run NodePressure ...
	I0831 15:21:59.948149    2067 start.go:241] waiting for startup goroutines ...
	I0831 15:21:59.948162    2067 start.go:246] waiting for cluster config update ...
	I0831 15:21:59.948182    2067 start.go:255] writing updated cluster config ...
	I0831 15:21:59.949472    2067 ssh_runner.go:195] Run: rm -f paused
	I0831 15:22:00.014750    2067 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0831 15:22:00.017955    2067 out.go:201] 
	W0831 15:22:00.021951    2067 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0831 15:22:00.024902    2067 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0831 15:22:00.030261    2067 out.go:177] * Done! kubectl is now configured to use "functional-221000" cluster and "default" namespace by default
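
	The warning above flags a minor-version skew of 2 between the host kubectl (1.29.2) and the cluster (1.31.0). A minimal way to avoid the skew is to use the kubectl bundled with minikube, as the hint suggests; the alias is a convenience assumption about the reader's shell, not part of the test run:

	    # use minikube's bundled kubectl, which matches the cluster version
	    minikube kubectl -- get pods -A
	    # optional convenience alias (bash/zsh)
	    alias kubectl="minikube kubectl --"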
	
	
	==> Docker <==
	Aug 31 22:22:37 functional-221000 dockerd[5661]: time="2024-08-31T22:22:37.644141453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:37 functional-221000 cri-dockerd[5916]: time="2024-08-31T22:22:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e4ab33e8d0a9de55e95a2d00819978d54b6420134ad14b95b6b7e430880a9c37/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 31 22:22:38 functional-221000 cri-dockerd[5916]: time="2024-08-31T22:22:38Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Aug 31 22:22:38 functional-221000 dockerd[5661]: time="2024-08-31T22:22:38.480656594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 31 22:22:38 functional-221000 dockerd[5661]: time="2024-08-31T22:22:38.480697968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 31 22:22:38 functional-221000 dockerd[5661]: time="2024-08-31T22:22:38.480710218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:38 functional-221000 dockerd[5661]: time="2024-08-31T22:22:38.480751302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:46 functional-221000 dockerd[5661]: time="2024-08-31T22:22:46.589590161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 31 22:22:46 functional-221000 dockerd[5661]: time="2024-08-31T22:22:46.589643369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 31 22:22:46 functional-221000 dockerd[5661]: time="2024-08-31T22:22:46.589658161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:46 functional-221000 dockerd[5661]: time="2024-08-31T22:22:46.589692036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:46 functional-221000 cri-dockerd[5916]: time="2024-08-31T22:22:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6db1063482efd4642c539550fbaa581d758703d51a976bbbe60f48d2043f5d3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 31 22:22:48 functional-221000 cri-dockerd[5916]: time="2024-08-31T22:22:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.084219790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.084273456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.084282498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.084312206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 22:22:48 functional-221000 dockerd[5655]: time="2024-08-31T22:22:48.117138186Z" level=info msg="ignoring event" container=7dce4e3095d87843366bdb638383319174514893be217ab6f734e539939d5aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.117353518Z" level=info msg="shim disconnected" id=7dce4e3095d87843366bdb638383319174514893be217ab6f734e539939d5aed namespace=moby
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.117411559Z" level=warning msg="cleaning up after shim disconnected" id=7dce4e3095d87843366bdb638383319174514893be217ab6f734e539939d5aed namespace=moby
	Aug 31 22:22:48 functional-221000 dockerd[5661]: time="2024-08-31T22:22:48.117420768Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 31 22:22:49 functional-221000 dockerd[5661]: time="2024-08-31T22:22:49.516527228Z" level=info msg="shim disconnected" id=f6db1063482efd4642c539550fbaa581d758703d51a976bbbe60f48d2043f5d3 namespace=moby
	Aug 31 22:22:49 functional-221000 dockerd[5655]: time="2024-08-31T22:22:49.516747269Z" level=info msg="ignoring event" container=f6db1063482efd4642c539550fbaa581d758703d51a976bbbe60f48d2043f5d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:22:49 functional-221000 dockerd[5661]: time="2024-08-31T22:22:49.517051684Z" level=warning msg="cleaning up after shim disconnected" id=f6db1063482efd4642c539550fbaa581d758703d51a976bbbe60f48d2043f5d3 namespace=moby
	Aug 31 22:22:49 functional-221000 dockerd[5661]: time="2024-08-31T22:22:49.517061601Z" level=info msg="cleaning up dead shim" namespace=moby
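
	The cri-dockerd entries above show the runtime rewriting each container's resolv.conf to point at the cluster DNS service (10.96.0.10). A sketch of confirming this from the running sp-pod, assuming the kubectl context still targets functional-221000:

	    kubectl exec sp-pod -- cat /etc/resolv.conf
	    # expected: nameserver 10.96.0.10 plus the cluster search domains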
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7dce4e3095d87       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 seconds ago        Exited              mount-munger              0                   f6db1063482ef       busybox-mount
	b9be0ebade385       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         13 seconds ago       Running             myfrontend                0                   e4ab33e8d0a9d       sp-pod
	40ade4d986c93       72565bf5bbedf                                                                                         15 seconds ago       Exited              echoserver-arm            2                   a19fe87a734af       hello-node-connect-65d86f57f4-v99wv
	bcb4ea034533b       72565bf5bbedf                                                                                         25 seconds ago       Exited              echoserver-arm            2                   08aefa885cafb       hello-node-64b4f8f9ff-rmffc
	83f829a61b8ff       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         36 seconds ago       Running             nginx                     0                   a1a84f432e38c       nginx-svc
	9b13719354320       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   17eb595141d83       coredns-6f6b679f8f-7rkzs
	c9c4970098a70       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   156140f975f6a       kube-proxy-24lmn
	d8b4a7df5661b       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   6d8f5c210618a       storage-provisioner
	697c74fffc2e4       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   adb32550a968c       kube-scheduler-functional-221000
	9d8f2158731e3       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   a58a407af250e       etcd-functional-221000
	11d628f3793dd       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   ebeaa9d496a68       kube-controller-manager-functional-221000
	74edd28a348e6       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   71c6dd8268636       kube-apiserver-functional-221000
	45948d30c3478       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   79c805b1288fd       coredns-6f6b679f8f-7rkzs
	4ba37f35ef49c       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   1fbcb2c82bbe7       storage-provisioner
	b8822ec238679       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   1e03326bc823a       kube-proxy-24lmn
	f5619c13909cd       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   c966bd867d839       kube-controller-manager-functional-221000
	06cb2ffdfa93e       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   2b5055df66920       etcd-functional-221000
	89ce027c72096       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   b6f718680b56f       kube-scheduler-functional-221000
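
	This table is the CRI view of the node: the Exited coredns/kube-proxy/etcd/scheduler entries with ATTEMPT 1 are leftovers from the earlier restart, while the ATTEMPT 2 containers are the live set. The same view can be reproduced on the node with crictl (a sketch, assuming the minikube profile is still running):

	    minikube ssh -- sudo crictl ps -a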
	
	
	==> coredns [45948d30c347] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36799 - 606 "HINFO IN 3921610683408791810.8743435914112216122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008707446s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b1371935432] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41237 - 30869 "HINFO IN 5870888341075441041.2866895591537115127. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009629723s
	[INFO] 10.244.0.1:10016 - 64822 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000092708s
	[INFO] 10.244.0.1:26616 - 63330 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000089291s
	[INFO] 10.244.0.1:47101 - 1971 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000033041s
	[INFO] 10.244.0.1:34638 - 12336 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001870368s
	[INFO] 10.244.0.1:49033 - 36858 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000069208s
	[INFO] 10.244.0.1:23510 - 17759 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000128583s
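
	The NOERROR answers above are coredns resolving nginx-svc for the service checks in this run. An equivalent in-cluster probe, assuming the busybox image is acceptable in the cluster:

	    kubectl run dnsprobe --rm -it --restart=Never --image=busybox:1.28 \
	      -- nslookup nginx-svc.default.svc.cluster.local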
	
	
	==> describe nodes <==
	Name:               functional-221000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-221000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=functional-221000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T15_20_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:20:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-221000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:22:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:22:46 +0000   Sat, 31 Aug 2024 22:20:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:22:46 +0000   Sat, 31 Aug 2024 22:20:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:22:46 +0000   Sat, 31 Aug 2024 22:20:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:22:46 +0000   Sat, 31 Aug 2024 22:20:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-221000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bc835ca40fd495fa2de4499349801cc
	  System UUID:                7bc835ca40fd495fa2de4499349801cc
	  Boot ID:                    8d619282-24ec-45d0-bca2-63bbdc1cb8a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-rmffc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     hello-node-connect-65d86f57f4-v99wv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-6f6b679f8f-7rkzs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m14s
	  kube-system                 etcd-functional-221000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m21s
	  kube-system                 kube-apiserver-functional-221000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-functional-221000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-24lmn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-functional-221000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m14s                kube-proxy       
	  Normal  Starting                 64s                  kube-proxy       
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m20s                kubelet          Node functional-221000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m20s                kubelet          Node functional-221000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s                kubelet          Node functional-221000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m16s                kubelet          Node functional-221000 status is now: NodeReady
	  Normal  RegisteredNode           2m15s                node-controller  Node functional-221000 event: Registered Node functional-221000 in Controller
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node functional-221000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node functional-221000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node functional-221000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                 node-controller  Node functional-221000 event: Registered Node functional-221000 in Controller
	  Normal  Starting                 69s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)    kubelet          Node functional-221000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)    kubelet          Node functional-221000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)    kubelet          Node functional-221000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                  node-controller  Node functional-221000 event: Registered Node functional-221000 in Controller
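
	Capacity and Allocatable match here because minikube reserves no system resources on this node, so the 750m of CPU requests is 37% of the full 2-CPU pool. The same numbers can be pulled directly, assuming kubectl points at this cluster:

	    kubectl get node functional-221000 \
	      -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'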
	
	
	==> dmesg <==
	[  +0.976717] systemd-fstab-generator[3821]: Ignoring "noauto" option for root device
	[Aug31 22:21] kauditd_printk_skb: 199 callbacks suppressed
	[  +7.368538] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.739079] systemd-fstab-generator[4736]: Ignoring "noauto" option for root device
	[ +11.036452] systemd-fstab-generator[5170]: Ignoring "noauto" option for root device
	[  +0.057824] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.144055] systemd-fstab-generator[5207]: Ignoring "noauto" option for root device
	[  +0.118559] systemd-fstab-generator[5219]: Ignoring "noauto" option for root device
	[  +0.145101] systemd-fstab-generator[5235]: Ignoring "noauto" option for root device
	[  +5.101613] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.340477] systemd-fstab-generator[5869]: Ignoring "noauto" option for root device
	[  +0.095791] systemd-fstab-generator[5881]: Ignoring "noauto" option for root device
	[  +0.084011] systemd-fstab-generator[5893]: Ignoring "noauto" option for root device
	[  +0.082390] systemd-fstab-generator[5908]: Ignoring "noauto" option for root device
	[  +0.208689] systemd-fstab-generator[6076]: Ignoring "noauto" option for root device
	[  +0.938179] systemd-fstab-generator[6199]: Ignoring "noauto" option for root device
	[  +4.427081] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.627841] systemd-fstab-generator[7215]: Ignoring "noauto" option for root device
	[  +0.054022] kauditd_printk_skb: 35 callbacks suppressed
	[Aug31 22:22] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.442440] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.086066] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.047478] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.709189] kauditd_printk_skb: 38 callbacks suppressed
	[ +17.487585] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [06cb2ffdfa93] <==
	{"level":"info","ts":"2024-08-31T22:21:00.505997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:21:00.506078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-31T22:21:00.506118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:00.506136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:00.506229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:00.506384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:00.509416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:21:00.509991Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:21:00.509406Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-221000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:21:00.510904Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:21:00.511153Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:21:00.512508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:21:00.513696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:21:00.514743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-31T22:21:00.515536Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:21:28.631365Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-31T22:21:28.631409Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-221000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-31T22:21:28.631452Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T22:21:28.631499Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T22:21:28.639856Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-31T22:21:28.639880Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-31T22:21:28.641141Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-31T22:21:28.642519Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-31T22:21:28.642563Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-31T22:21:28.642568Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-221000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [9d8f2158731e] <==
	{"level":"info","ts":"2024-08-31T22:21:43.324939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-31T22:21:43.324950Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-31T22:21:43.324954Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-31T22:21:43.329144Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-31T22:21:43.329700Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-31T22:21:43.329223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-31T22:21:43.329786Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-31T22:21:43.329840Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:21:43.329889Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:21:44.982632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:44.982795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:44.982859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-31T22:21:44.982892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-31T22:21:44.983318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-31T22:21:44.983352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-31T22:21:44.983378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-31T22:21:44.988061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:21:44.988513Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:21:44.988067Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-221000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:21:44.989247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:21:44.989429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:21:44.990647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:21:44.990647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:21:44.992612Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-31T22:21:44.993837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:22:51 up 2 min,  0 users,  load average: 0.60, 0.33, 0.13
	Linux functional-221000 5.10.207 #1 SMP PREEMPT Wed Aug 28 18:03:56 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [74edd28a348e] <==
	I0831 22:21:45.580211       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 22:21:45.580456       1 aggregator.go:171] initial CRD sync complete...
	I0831 22:21:45.580469       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 22:21:45.580472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 22:21:45.580475       1 cache.go:39] Caches are synced for autoregister controller
	I0831 22:21:45.580570       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 22:21:45.581056       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 22:21:45.581104       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 22:21:45.581125       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 22:21:45.586596       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 22:21:45.606815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 22:21:46.497225       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0831 22:21:46.691012       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0831 22:21:46.691774       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 22:21:46.694001       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0831 22:21:47.129403       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0831 22:21:47.133265       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0831 22:21:47.146811       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0831 22:21:47.153966       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0831 22:21:47.155920       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0831 22:22:01.477717       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.60.174"}
	I0831 22:22:07.222151       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0831 22:22:07.264553       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.234.247"}
	I0831 22:22:12.352665       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.104.181"}
	I0831 22:22:22.784147       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.151.127"}
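
	The alloc.go lines record the ClusterIPs handed to the test services (invalid-svc, hello-node, nginx-svc, hello-node-connect). They can be cross-checked against the live service list:

	    kubectl get svc -A -o wide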
	
	
	==> kube-controller-manager [11d628f3793d] <==
	I0831 22:21:49.109822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="75.582µs"
	I0831 22:21:49.365710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="4.321975ms"
	I0831 22:21:49.365997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="262.621µs"
	I0831 22:21:49.469250       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 22:21:49.504593       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 22:21:49.504618       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0831 22:22:07.233892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.684252ms"
	I0831 22:22:07.242405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.489094ms"
	I0831 22:22:07.242435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.417µs"
	I0831 22:22:12.834699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.084µs"
	I0831 22:22:13.873833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="38.458µs"
	I0831 22:22:14.881346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="25.25µs"
	I0831 22:22:16.292639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-221000"
	I0831 22:22:22.750013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="6.930098ms"
	I0831 22:22:22.755516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.45952ms"
	I0831 22:22:22.755550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="13.292µs"
	I0831 22:22:22.759550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.792µs"
	I0831 22:22:24.017765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="20.333µs"
	I0831 22:22:25.045770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.583µs"
	I0831 22:22:27.081140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="38.833µs"
	I0831 22:22:36.511500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="83.875µs"
	I0831 22:22:37.221073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="88.458µs"
	I0831 22:22:38.511290       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.917µs"
	I0831 22:22:46.932639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-221000"
	I0831 22:22:48.520382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="49.958µs"
	
	
	==> kube-controller-manager [f5619c13909c] <==
	I0831 22:21:04.461096       1 shared_informer.go:320] Caches are synced for attach detach
	I0831 22:21:04.462237       1 shared_informer.go:320] Caches are synced for PVC protection
	I0831 22:21:04.463337       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0831 22:21:04.469639       1 shared_informer.go:320] Caches are synced for daemon sets
	I0831 22:21:04.482187       1 shared_informer.go:320] Caches are synced for taint
	I0831 22:21:04.482206       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0831 22:21:04.482212       1 shared_informer.go:320] Caches are synced for ephemeral
	I0831 22:21:04.482346       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0831 22:21:04.482407       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-221000"
	I0831 22:21:04.482216       1 shared_informer.go:320] Caches are synced for job
	I0831 22:21:04.482473       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0831 22:21:04.537022       1 shared_informer.go:320] Caches are synced for deployment
	I0831 22:21:04.559222       1 shared_informer.go:320] Caches are synced for disruption
	I0831 22:21:04.568767       1 shared_informer.go:320] Caches are synced for resource quota
	I0831 22:21:04.582421       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0831 22:21:04.582497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.291µs"
	I0831 22:21:04.583913       1 shared_informer.go:320] Caches are synced for resource quota
	I0831 22:21:04.997601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 22:21:05.048934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 22:21:05.049089       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0831 22:21:09.307232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.646679ms"
	I0831 22:21:09.326261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.062926ms"
	I0831 22:21:09.326929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.083µs"
	I0831 22:21:09.855700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.974965ms"
	I0831 22:21:09.857805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="30.833µs"
	
	
	==> kube-proxy [b8822ec23867] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:21:02.151827       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:21:02.166944       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0831 22:21:02.166983       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:21:02.192260       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:21:02.192283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:21:02.192301       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:21:02.196022       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:21:02.196139       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:21:02.196224       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:21:02.196740       1 config.go:197] "Starting service config controller"
	I0831 22:21:02.196750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:21:02.196760       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:21:02.196762       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:21:02.197045       1 config.go:326] "Starting node config controller"
	I0831 22:21:02.197048       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:21:02.297145       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:21:02.297174       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:21:02.297185       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c9c4970098a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0831 22:21:47.023669       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0831 22:21:47.027849       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0831 22:21:47.027877       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:21:47.093540       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0831 22:21:47.093561       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0831 22:21:47.093577       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:21:47.094322       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:21:47.094535       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:21:47.094707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:21:47.095435       1 config.go:197] "Starting service config controller"
	I0831 22:21:47.095489       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:21:47.095521       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:21:47.095537       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:21:47.095933       1 config.go:326] "Starting node config controller"
	I0831 22:21:47.096794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:21:47.196103       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:21:47.196155       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:21:47.196878       1 shared_informer.go:320] Caches are synced for node config
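
	In both kube-proxy runs the nftables cleanup fails because the minikube kernel lacks nft table support, and IPv6 iptables is unavailable, so kube-proxy falls back to the single-stack IPv4 iptables proxier ("Using iptables Proxier"). The configured (as opposed to effective) mode can be read back from its configmap, assuming the default kubeadm naming:

	    kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'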
	
	
	==> kube-scheduler [697c74fffc2e] <==
	I0831 22:21:43.516627       1 serving.go:386] Generated self-signed cert in-memory
	W0831 22:21:45.501330       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0831 22:21:45.501372       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 22:21:45.501400       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 22:21:45.501408       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 22:21:45.524105       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0831 22:21:45.525059       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:21:45.531206       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0831 22:21:45.531271       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0831 22:21:45.531288       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 22:21:45.531310       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0831 22:21:45.634904       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [89ce027c7209] <==
	I0831 22:20:58.992162       1 serving.go:386] Generated self-signed cert in-memory
	W0831 22:21:01.028055       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0831 22:21:01.028105       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 22:21:01.028120       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0831 22:21:01.028127       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0831 22:21:01.046473       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0831 22:21:01.046514       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:21:01.047511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0831 22:21:01.047552       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0831 22:21:01.047774       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0831 22:21:01.047900       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0831 22:21:01.150061       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 22:21:28.636385       1 run.go:72] "command failed" err="finished without leader elect"
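
	The requestheader_controller warnings come from the scheduler starting before RBAC for the extension-apiserver-authentication configmap is readable; the log itself names the fix, reproduced here with its own placeholders (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are to be substituted):

	    kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
	      --role=extension-apiserver-authentication-reader \
	      --serviceaccount=YOUR_NS:YOUR_SA

	The final "finished without leader elect" error is the old scheduler being terminated during the restart, not an independent crash.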
	
	
	==> kubelet <==
	Aug 31 22:22:37 functional-221000 kubelet[6206]: E0831 22:22:37.278597    6206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a39d87e-9892-4cf4-91d0-f7023dfd0c17" containerName="myfrontend"
	Aug 31 22:22:37 functional-221000 kubelet[6206]: I0831 22:22:37.278716    6206 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a39d87e-9892-4cf4-91d0-f7023dfd0c17" containerName="myfrontend"
	Aug 31 22:22:37 functional-221000 kubelet[6206]: I0831 22:22:37.393775    6206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d864w\" (UniqueName: \"kubernetes.io/projected/c31e88bc-5f56-468e-bd84-bf76f5cc4274-kube-api-access-d864w\") pod \"sp-pod\" (UID: \"c31e88bc-5f56-468e-bd84-bf76f5cc4274\") " pod="default/sp-pod"
	Aug 31 22:22:37 functional-221000 kubelet[6206]: I0831 22:22:37.393814    6206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a\" (UniqueName: \"kubernetes.io/host-path/c31e88bc-5f56-468e-bd84-bf76f5cc4274-pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a\") pod \"sp-pod\" (UID: \"c31e88bc-5f56-468e-bd84-bf76f5cc4274\") " pod="default/sp-pod"
	Aug 31 22:22:38 functional-221000 kubelet[6206]: I0831 22:22:38.503729    6206 scope.go:117] "RemoveContainer" containerID="bcb4ea034533bdf3aa1cab189f93f4e6bd4bc20f3ff255de83acda4dd220ad0e"
	Aug 31 22:22:38 functional-221000 kubelet[6206]: E0831 22:22:38.503821    6206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-rmffc_default(154dd718-9dfb-4b9c-b955-928c716c875d)\"" pod="default/hello-node-64b4f8f9ff-rmffc" podUID="154dd718-9dfb-4b9c-b955-928c716c875d"
	Aug 31 22:22:38 functional-221000 kubelet[6206]: I0831 22:22:38.514523    6206 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a39d87e-9892-4cf4-91d0-f7023dfd0c17" path="/var/lib/kubelet/pods/1a39d87e-9892-4cf4-91d0-f7023dfd0c17/volumes"
	Aug 31 22:22:39 functional-221000 kubelet[6206]: I0831 22:22:39.277548    6206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.527452524 podStartE2EDuration="2.277524383s" podCreationTimestamp="2024-08-31 22:22:37 +0000 UTC" firstStartedPulling="2024-08-31 22:22:37.696060752 +0000 UTC m=+55.269661722" lastFinishedPulling="2024-08-31 22:22:38.44613257 +0000 UTC m=+56.019733581" observedRunningTime="2024-08-31 22:22:39.277479383 +0000 UTC m=+56.851080395" watchObservedRunningTime="2024-08-31 22:22:39.277524383 +0000 UTC m=+56.851125353"
	Aug 31 22:22:42 functional-221000 kubelet[6206]: E0831 22:22:42.513578    6206 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 31 22:22:42 functional-221000 kubelet[6206]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 31 22:22:42 functional-221000 kubelet[6206]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 31 22:22:42 functional-221000 kubelet[6206]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 31 22:22:42 functional-221000 kubelet[6206]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 31 22:22:42 functional-221000 kubelet[6206]: I0831 22:22:42.568347    6206 scope.go:117] "RemoveContainer" containerID="04b7ad0da85fe50245b6bfee6e7ccd476b7458e6a4736a64f6df7eea80ac3828"
	Aug 31 22:22:46 functional-221000 kubelet[6206]: I0831 22:22:46.378522    6206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-test-volume\") pod \"busybox-mount\" (UID: \"05265cce-6cbb-4b51-b5dd-82c7c3475f7d\") " pod="default/busybox-mount"
	Aug 31 22:22:46 functional-221000 kubelet[6206]: I0831 22:22:46.378551    6206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxvt\" (UniqueName: \"kubernetes.io/projected/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-kube-api-access-wgxvt\") pod \"busybox-mount\" (UID: \"05265cce-6cbb-4b51-b5dd-82c7c3475f7d\") " pod="default/busybox-mount"
	Aug 31 22:22:48 functional-221000 kubelet[6206]: I0831 22:22:48.504279    6206 scope.go:117] "RemoveContainer" containerID="40ade4d986c9365751cfc873885d3efc044bfb1df0757ab5abd4bee0a531797e"
	Aug 31 22:22:48 functional-221000 kubelet[6206]: E0831 22:22:48.504477    6206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-v99wv_default(7b164e4b-4dc7-4c3e-9604-070e8d2cf984)\"" pod="default/hello-node-connect-65d86f57f4-v99wv" podUID="7b164e4b-4dc7-4c3e-9604-070e8d2cf984"
	Aug 31 22:22:49 functional-221000 kubelet[6206]: I0831 22:22:49.719338    6206 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-test-volume" (OuterVolumeSpecName: "test-volume") pod "05265cce-6cbb-4b51-b5dd-82c7c3475f7d" (UID: "05265cce-6cbb-4b51-b5dd-82c7c3475f7d"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 31 22:22:49 functional-221000 kubelet[6206]: I0831 22:22:49.719377    6206 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-test-volume\") pod \"05265cce-6cbb-4b51-b5dd-82c7c3475f7d\" (UID: \"05265cce-6cbb-4b51-b5dd-82c7c3475f7d\") "
	Aug 31 22:22:49 functional-221000 kubelet[6206]: I0831 22:22:49.719398    6206 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgxvt\" (UniqueName: \"kubernetes.io/projected/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-kube-api-access-wgxvt\") pod \"05265cce-6cbb-4b51-b5dd-82c7c3475f7d\" (UID: \"05265cce-6cbb-4b51-b5dd-82c7c3475f7d\") "
	Aug 31 22:22:49 functional-221000 kubelet[6206]: I0831 22:22:49.719422    6206 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-test-volume\") on node \"functional-221000\" DevicePath \"\""
	Aug 31 22:22:49 functional-221000 kubelet[6206]: I0831 22:22:49.723147    6206 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-kube-api-access-wgxvt" (OuterVolumeSpecName: "kube-api-access-wgxvt") pod "05265cce-6cbb-4b51-b5dd-82c7c3475f7d" (UID: "05265cce-6cbb-4b51-b5dd-82c7c3475f7d"). InnerVolumeSpecName "kube-api-access-wgxvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:22:49 functional-221000 kubelet[6206]: I0831 22:22:49.821414    6206 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wgxvt\" (UniqueName: \"kubernetes.io/projected/05265cce-6cbb-4b51-b5dd-82c7c3475f7d-kube-api-access-wgxvt\") on node \"functional-221000\" DevicePath \"\""
	Aug 31 22:22:50 functional-221000 kubelet[6206]: I0831 22:22:50.435837    6206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6db1063482efd4642c539550fbaa581d758703d51a976bbbe60f48d2043f5d3"
	
	
	==> storage-provisioner [4ba37f35ef49] <==
	I0831 22:21:02.137830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:21:02.148020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:21:02.148067       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:21:02.160037       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:21:02.160120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-221000_d299c7c8-54a7-4cec-8494-1fff197c3aaf!
	I0831 22:21:02.160165       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ca52204-899a-41c6-a66a-bca85838d6e1", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-221000_d299c7c8-54a7-4cec-8494-1fff197c3aaf became leader
	I0831 22:21:02.260924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-221000_d299c7c8-54a7-4cec-8494-1fff197c3aaf!
	
	
	==> storage-provisioner [d8b4a7df5661] <==
	I0831 22:21:46.995609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:21:47.001293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:21:47.001313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:22:04.417689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:22:04.418035       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-221000_f32a3a0c-64be-4dfa-bf0a-e584f3200e8c!
	I0831 22:22:04.419484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ca52204-899a-41c6-a66a-bca85838d6e1", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-221000_f32a3a0c-64be-4dfa-bf0a-e584f3200e8c became leader
	I0831 22:22:04.519019       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-221000_f32a3a0c-64be-4dfa-bf0a-e584f3200e8c!
	I0831 22:22:23.939151       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0831 22:22:23.939411       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"32f7029b-e8cc-40db-bb36-e8db7fe67a1a", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0831 22:22:23.939212       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    56339d57-510d-485d-9f94-de301b83c6d3 306 0 2024-08-31 22:20:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-31 22:20:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  32f7029b-e8cc-40db-bb36-e8db7fe67a1a 721 0 2024-08-31 22:22:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-31 22:22:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-31 22:22:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0831 22:22:23.939836       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a" provisioned
	I0831 22:22:23.939851       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0831 22:22:23.939858       1 volume_store.go:212] Trying to save persistentvolume "pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a"
	I0831 22:22:23.944102       1 volume_store.go:219] persistentvolume "pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a" saved
	I0831 22:22:23.944225       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"32f7029b-e8cc-40db-bb36-e8db7fe67a1a", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-32f7029b-e8cc-40db-bb36-e8db7fe67a1a
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-221000 -n functional-221000
helpers_test.go:262: (dbg) Run:  kubectl --context functional-221000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox-mount
helpers_test.go:275: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context functional-221000 describe pod busybox-mount
helpers_test.go:283: (dbg) kubectl --context functional-221000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-221000/192.168.105.4
	Start Time:       Sat, 31 Aug 2024 15:22:46 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://7dce4e3095d87843366bdb638383319174514893be217ab6f734e539939d5aed
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 31 Aug 2024 15:22:48 -0700
	      Finished:     Sat, 31 Aug 2024 15:22:48 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgxvt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wgxvt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/busybox-mount to functional-221000
	  Normal  Pulling    6s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.411s (1.411s including waiting). Image size: 3547125 bytes.
	  Normal  Created    4s    kubelet            Created container mount-munger
	  Normal  Started    4s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:286: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.49s)
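The kubelet log above shows the echoserver-arm container of default/hello-node-connect-65d86f57f4-v99wv in CrashLoopBackOff, and that deployment is the backend this test tries to connect to. A minimal triage sketch, assuming the functional-221000 kubeconfig context is still reachable; the pod name is the one from this run's log and would differ on another run:

    # Inspect the crashing backend (names taken from the kubelet log above).
    kubectl --context functional-221000 get pods -o wide
    kubectl --context functional-221000 describe pod hello-node-connect-65d86f57f4-v99wv
    kubectl --context functional-221000 logs hello-node-connect-65d86f57f4-v99wv --previous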

TestMultiControlPlane/serial/StopSecondaryNode (214.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 node stop m02 -v=7 --alsologtostderr
E0831 15:27:09.833431    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:12.396870    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:17.520302    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-166000 node stop m02 -v=7 --alsologtostderr: (12.189765375s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr
E0831 15:27:27.763109    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:32.449374    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:48.246219    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:28:00.173468    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:28:29.208705    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:29:51.130134    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr: exit status 7 (2m55.977931s)

-- stdout --
	ha-166000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-166000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-166000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0831 15:27:21.234903    2716 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:27:21.235092    2716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:27:21.235096    2716 out.go:358] Setting ErrFile to fd 2...
	I0831 15:27:21.235098    2716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:27:21.235230    2716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:27:21.235367    2716 out.go:352] Setting JSON to false
	I0831 15:27:21.235389    2716 mustload.go:65] Loading cluster: ha-166000
	I0831 15:27:21.235443    2716 notify.go:220] Checking for updates...
	I0831 15:27:21.235635    2716 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:27:21.235643    2716 status.go:255] checking status of ha-166000 ...
	I0831 15:27:21.236361    2716 status.go:330] ha-166000 host status = "Running" (err=<nil>)
	I0831 15:27:21.236371    2716 host.go:66] Checking if "ha-166000" exists ...
	I0831 15:27:21.236485    2716 host.go:66] Checking if "ha-166000" exists ...
	I0831 15:27:21.236604    2716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 15:27:21.236613    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/id_rsa Username:docker}
	W0831 15:27:47.161110    2716 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0831 15:27:47.161248    2716 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0831 15:27:47.161268    2716 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0831 15:27:47.161292    2716 status.go:257] ha-166000 status: &{Name:ha-166000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 15:27:47.161315    2716 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0831 15:27:47.161326    2716 status.go:255] checking status of ha-166000-m02 ...
	I0831 15:27:47.161769    2716 status.go:330] ha-166000-m02 host status = "Stopped" (err=<nil>)
	I0831 15:27:47.161780    2716 status.go:343] host is not running, skipping remaining checks
	I0831 15:27:47.161785    2716 status.go:257] ha-166000-m02 status: &{Name:ha-166000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:27:47.161795    2716 status.go:255] checking status of ha-166000-m03 ...
	I0831 15:27:47.163007    2716 status.go:330] ha-166000-m03 host status = "Running" (err=<nil>)
	I0831 15:27:47.163018    2716 host.go:66] Checking if "ha-166000-m03" exists ...
	I0831 15:27:47.163244    2716 host.go:66] Checking if "ha-166000-m03" exists ...
	I0831 15:27:47.163495    2716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 15:27:47.163507    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m03/id_rsa Username:docker}
	W0831 15:29:02.163914    2716 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0831 15:29:02.163965    2716 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0831 15:29:02.163973    2716 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0831 15:29:02.163977    2716 status.go:257] ha-166000-m03 status: &{Name:ha-166000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 15:29:02.163998    2716 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0831 15:29:02.164001    2716 status.go:255] checking status of ha-166000-m04 ...
	I0831 15:29:02.164674    2716 status.go:330] ha-166000-m04 host status = "Running" (err=<nil>)
	I0831 15:29:02.164681    2716 host.go:66] Checking if "ha-166000-m04" exists ...
	I0831 15:29:02.164786    2716 host.go:66] Checking if "ha-166000-m04" exists ...
	I0831 15:29:02.164906    2716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 15:29:02.164912    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m04/id_rsa Username:docker}
	W0831 15:30:17.164297    2716 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0831 15:30:17.164347    2716 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0831 15:30:17.164355    2716 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0831 15:30:17.164387    2716 status.go:257] ha-166000-m04 status: &{Name:ha-166000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0831 15:30:17.164397    2716 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr": ha-166000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-166000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-166000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr": ha-166000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-166000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-166000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr": ha-166000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-166000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-166000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 3 (25.961523542s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0831 15:30:43.125360    3051 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0831 15:30:43.125370    3051 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.13s)
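Every probe in the status output above fails at the SSH layer: the df -h /var capacity check times out dialing port 22 on 192.168.105.5/.7/.8, so those nodes report Error/Nonexistent before any Kubernetes-level check runs. A hedged manual reproduction of the same probe, reusing the user, key path, and pipeline printed in the log (this confirms only SSH reachability, not kubelet health):

    # Re-run the capacity probe that `status` executes over SSH (IP/key from the log above).
    ssh -i /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/id_rsa \
        docker@192.168.105.5 "df -h /var | awk 'NR==2{print \$5}'"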

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.430113417s)
ha_test.go:413: expected profile "ha-166000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-166000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-166000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-166000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
E0831 15:32:07.247803    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 3 (25.959326041s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0831 15:32:27.512696    3092 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0831 15:32:27.512712    3092 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.39s)
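The assertion above compares only the Status field of each entry in the `profile list --output json` payload. A hedged one-liner for pulling just those fields out of a run like this, assuming jq is available (it is not part of the test harness):

    # Print the name/status pairs the test compares against "Degraded".
    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | [.Name, .Status] | @tsv'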

TestMultiControlPlane/serial/RestartSecondaryNode (182.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 node start m02 -v=7 --alsologtostderr
E0831 15:32:32.441905    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-166000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.082600125s)

-- stdout --
	* Starting "ha-166000-m02" control-plane node in "ha-166000" cluster
	* Restarting existing qemu2 VM for "ha-166000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-166000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:32:27.546625    3098 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:32:27.547056    3098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:32:27.547062    3098 out.go:358] Setting ErrFile to fd 2...
	I0831 15:32:27.547065    3098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:32:27.547183    3098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:32:27.547585    3098 mustload.go:65] Loading cluster: ha-166000
	I0831 15:32:27.547831    3098 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0831 15:32:27.548065    3098 host.go:58] "ha-166000-m02" host status: Stopped
	I0831 15:32:27.551800    3098 out.go:177] * Starting "ha-166000-m02" control-plane node in "ha-166000" cluster
	I0831 15:32:27.554809    3098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:32:27.554821    3098 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:32:27.554835    3098 cache.go:56] Caching tarball of preloaded images
	I0831 15:32:27.554907    3098 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:32:27.554912    3098 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:32:27.554973    3098 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/ha-166000/config.json ...
	I0831 15:32:27.556679    3098 start.go:360] acquireMachinesLock for ha-166000-m02: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:32:27.556725    3098 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "ha-166000-m02"
	I0831 15:32:27.556734    3098 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:32:27.556740    3098 fix.go:54] fixHost starting: m02
	I0831 15:32:27.556889    3098 fix.go:112] recreateIfNeeded on ha-166000-m02: state=Stopped err=<nil>
	W0831 15:32:27.556894    3098 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:32:27.560810    3098 out.go:177] * Restarting existing qemu2 VM for "ha-166000-m02" ...
	I0831 15:32:27.563781    3098 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:32:27.563816    3098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:3f:40:0e:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/disk.qcow2
	I0831 15:32:27.566106    3098 main.go:141] libmachine: STDOUT: 
	I0831 15:32:27.566126    3098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:32:27.566151    3098 fix.go:56] duration metric: took 9.412042ms for fixHost
	I0831 15:32:27.566155    3098 start.go:83] releasing machines lock for "ha-166000-m02", held for 9.425417ms
	W0831 15:32:27.566185    3098 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:32:27.566216    3098 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:32:27.566219    3098 start.go:729] Will try again in 5 seconds ...
	I0831 15:32:32.568140    3098 start.go:360] acquireMachinesLock for ha-166000-m02: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:32:32.568235    3098 start.go:364] duration metric: took 80.208µs to acquireMachinesLock for "ha-166000-m02"
	I0831 15:32:32.568260    3098 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:32:32.568264    3098 fix.go:54] fixHost starting: m02
	I0831 15:32:32.568426    3098 fix.go:112] recreateIfNeeded on ha-166000-m02: state=Stopped err=<nil>
	W0831 15:32:32.568431    3098 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:32:32.572760    3098 out.go:177] * Restarting existing qemu2 VM for "ha-166000-m02" ...
	I0831 15:32:32.575721    3098 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:32:32.575766    3098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:3f:40:0e:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/disk.qcow2
	I0831 15:32:32.577965    3098 main.go:141] libmachine: STDOUT: 
	I0831 15:32:32.577982    3098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:32:32.578001    3098 fix.go:56] duration metric: took 9.737708ms for fixHost
	I0831 15:32:32.578004    3098 start.go:83] releasing machines lock for "ha-166000-m02", held for 9.764083ms
	W0831 15:32:32.578069    3098 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:32:32.581719    3098 out.go:201] 
	W0831 15:32:32.585778    3098 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:32:32.585784    3098 out.go:270] * 
	* 
	W0831 15:32:32.588096    3098 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:32:32.591725    3098 out.go:201] 

** /stderr **
ha_test.go:422: I0831 15:32:27.546625    3098 out.go:345] Setting OutFile to fd 1 ...
I0831 15:32:27.547056    3098 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:32:27.547062    3098 out.go:358] Setting ErrFile to fd 2...
I0831 15:32:27.547065    3098 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:32:27.547183    3098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:32:27.547585    3098 mustload.go:65] Loading cluster: ha-166000
I0831 15:32:27.547831    3098 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0831 15:32:27.548065    3098 host.go:58] "ha-166000-m02" host status: Stopped
I0831 15:32:27.551800    3098 out.go:177] * Starting "ha-166000-m02" control-plane node in "ha-166000" cluster
I0831 15:32:27.554809    3098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0831 15:32:27.554821    3098 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0831 15:32:27.554835    3098 cache.go:56] Caching tarball of preloaded images
I0831 15:32:27.554907    3098 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0831 15:32:27.554912    3098 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0831 15:32:27.554973    3098 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/ha-166000/config.json ...
I0831 15:32:27.556679    3098 start.go:360] acquireMachinesLock for ha-166000-m02: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0831 15:32:27.556725    3098 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "ha-166000-m02"
I0831 15:32:27.556734    3098 start.go:96] Skipping create...Using existing machine configuration
I0831 15:32:27.556740    3098 fix.go:54] fixHost starting: m02
I0831 15:32:27.556889    3098 fix.go:112] recreateIfNeeded on ha-166000-m02: state=Stopped err=<nil>
W0831 15:32:27.556894    3098 fix.go:138] unexpected machine state, will restart: <nil>
I0831 15:32:27.560810    3098 out.go:177] * Restarting existing qemu2 VM for "ha-166000-m02" ...
I0831 15:32:27.563781    3098 qemu.go:418] Using hvf for hardware acceleration
I0831 15:32:27.563816    3098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:3f:40:0e:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/disk.qcow2
I0831 15:32:27.566106    3098 main.go:141] libmachine: STDOUT: 
I0831 15:32:27.566126    3098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0831 15:32:27.566151    3098 fix.go:56] duration metric: took 9.412042ms for fixHost
I0831 15:32:27.566155    3098 start.go:83] releasing machines lock for "ha-166000-m02", held for 9.425417ms
W0831 15:32:27.566185    3098 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0831 15:32:27.566216    3098 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0831 15:32:27.566219    3098 start.go:729] Will try again in 5 seconds ...
I0831 15:32:32.568140    3098 start.go:360] acquireMachinesLock for ha-166000-m02: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0831 15:32:32.568235    3098 start.go:364] duration metric: took 80.208µs to acquireMachinesLock for "ha-166000-m02"
I0831 15:32:32.568260    3098 start.go:96] Skipping create...Using existing machine configuration
I0831 15:32:32.568264    3098 fix.go:54] fixHost starting: m02
I0831 15:32:32.568426    3098 fix.go:112] recreateIfNeeded on ha-166000-m02: state=Stopped err=<nil>
W0831 15:32:32.568431    3098 fix.go:138] unexpected machine state, will restart: <nil>
I0831 15:32:32.572760    3098 out.go:177] * Restarting existing qemu2 VM for "ha-166000-m02" ...
I0831 15:32:32.575721    3098 qemu.go:418] Using hvf for hardware acceleration
I0831 15:32:32.575766    3098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:3f:40:0e:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m02/disk.qcow2
I0831 15:32:32.577965    3098 main.go:141] libmachine: STDOUT: 
I0831 15:32:32.577982    3098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0831 15:32:32.578001    3098 fix.go:56] duration metric: took 9.737708ms for fixHost
I0831 15:32:32.578004    3098 start.go:83] releasing machines lock for "ha-166000-m02", held for 9.764083ms
W0831 15:32:32.578069    3098 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0831 15:32:32.581719    3098 out.go:201] 
W0831 15:32:32.585778    3098 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0831 15:32:32.585784    3098 out.go:270] * 
* 
W0831 15:32:32.588096    3098 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0831 15:32:32.591725    3098 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-166000 node start m02 -v=7 --alsologtostderr": exit status 80
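Both restart attempts above fail at the same step: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial to /var/run/socket_vmnet is refused, so the VM never boots and the failure is host-side networking setup rather than anything inside the cluster. A hedged host-side check, assuming socket_vmnet was installed as a launchd-managed service (both paths are the ones printed in the log):

    # Is the socket present, and is anything serving it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet   # assumes a launchd-managed socket_vmnet daemon
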
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr
E0831 15:32:34.969352    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr: exit status 7 (2m31.794791625s)

-- stdout --
	ha-166000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-166000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-166000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0831 15:32:32.628234    3102 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:32:32.628601    3102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:32:32.628613    3102 out.go:358] Setting ErrFile to fd 2...
	I0831 15:32:32.628616    3102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:32:32.628765    3102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:32:32.628877    3102 out.go:352] Setting JSON to false
	I0831 15:32:32.628891    3102 mustload.go:65] Loading cluster: ha-166000
	I0831 15:32:32.628938    3102 notify.go:220] Checking for updates...
	I0831 15:32:32.629129    3102 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:32:32.629138    3102 status.go:255] checking status of ha-166000 ...
	I0831 15:32:32.629829    3102 status.go:330] ha-166000 host status = "Running" (err=<nil>)
	I0831 15:32:32.629839    3102 host.go:66] Checking if "ha-166000" exists ...
	I0831 15:32:32.629943    3102 host.go:66] Checking if "ha-166000" exists ...
	I0831 15:32:32.630054    3102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 15:32:32.630061    3102 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/id_rsa Username:docker}
	W0831 15:32:32.630231    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0831 15:32:32.630246    3102 retry.go:31] will retry after 171.6762ms: dial tcp 192.168.105.5:22: connect: host is down
	W0831 15:32:32.802794    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0831 15:32:32.802816    3102 retry.go:31] will retry after 199.559596ms: dial tcp 192.168.105.5:22: connect: host is down
	W0831 15:32:33.003736    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0831 15:32:33.003758    3102 retry.go:31] will retry after 577.182194ms: dial tcp 192.168.105.5:22: connect: host is down
	W0831 15:32:33.582425    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0831 15:32:33.582452    3102 retry.go:31] will retry after 788.45475ms: dial tcp 192.168.105.5:22: connect: host is down
	W0831 15:32:34.373073    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	W0831 15:32:34.373154    3102 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E0831 15:32:34.373165    3102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0831 15:32:34.373169    3102 status.go:257] ha-166000 status: &{Name:ha-166000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 15:32:34.373186    3102 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0831 15:32:34.373190    3102 status.go:255] checking status of ha-166000-m02 ...
	I0831 15:32:34.373352    3102 status.go:330] ha-166000-m02 host status = "Stopped" (err=<nil>)
	I0831 15:32:34.373358    3102 status.go:343] host is not running, skipping remaining checks
	I0831 15:32:34.373360    3102 status.go:257] ha-166000-m02 status: &{Name:ha-166000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:32:34.373364    3102 status.go:255] checking status of ha-166000-m03 ...
	I0831 15:32:34.374072    3102 status.go:330] ha-166000-m03 host status = "Running" (err=<nil>)
	I0831 15:32:34.374081    3102 host.go:66] Checking if "ha-166000-m03" exists ...
	I0831 15:32:34.374214    3102 host.go:66] Checking if "ha-166000-m03" exists ...
	I0831 15:32:34.374345    3102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 15:32:34.374353    3102 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m03/id_rsa Username:docker}
	W0831 15:33:49.374897    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0831 15:33:49.375075    3102 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0831 15:33:49.375114    3102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0831 15:33:49.375136    3102 status.go:257] ha-166000-m03 status: &{Name:ha-166000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0831 15:33:49.375178    3102 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0831 15:33:49.375198    3102 status.go:255] checking status of ha-166000-m04 ...
	I0831 15:33:49.378319    3102 status.go:330] ha-166000-m04 host status = "Running" (err=<nil>)
	I0831 15:33:49.378348    3102 host.go:66] Checking if "ha-166000-m04" exists ...
	I0831 15:33:49.379004    3102 host.go:66] Checking if "ha-166000-m04" exists ...
	I0831 15:33:49.379619    3102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 15:33:49.379652    3102 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000-m04/id_rsa Username:docker}
	W0831 15:35:04.381184    3102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0831 15:35:04.381378    3102 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0831 15:35:04.381421    3102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0831 15:35:04.381445    3102 status.go:257] ha-166000-m04 status: &{Name:ha-166000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0831 15:35:04.381497    3102 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 3 (25.998857542s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0831 15:35:30.382044    3147 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0831 15:35:30.382079    3147 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (182.88s)
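
The stderr above shows how the status probe behaves when nodes are unreachable: it dials each node's SSH port, retrying with short, growing delays (~171ms, ~200ms, ~577ms, ~788ms) before marking the host as Error, and the storage-capacity check (`df -h /var`) never runs. Below is a minimal Go sketch of that dial-with-backoff pattern; the function name, attempt count, and backoff factor are illustrative, not minikube's actual retry.go code.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry tries to reach addr a few times, sleeping between attempts
// with a growing delay, and returns the first successful connection.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	delay := 170 * time.Millisecond // first retry in the log above is ~171ms
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay, roughly like the log above
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if conn, err := dialWithRetry("192.168.105.5:22", 4); err == nil {
		conn.Close()
	} else {
		fmt.Println(err) // with the VM down, this prints the final dial error
	}
}

Note the timing in the run above: the primary node fails fast ("host is down"), while m03 and m04 each sit in a ~75s TCP connect timeout, which is why this single status call took 2m31s.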

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-166000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-166000 -v=7 --alsologtostderr
E0831 15:37:07.240581    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:37:32.434793    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:38:55.519620    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-166000 -v=7 --alsologtostderr: (3m49.027649375s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-166000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-166000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.228967917s)

-- stdout --
	* [ha-166000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-166000" primary control-plane node in "ha-166000" cluster
	* Restarting existing qemu2 VM for "ha-166000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-166000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:40:38.408430    3277 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:40:38.408614    3277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:40:38.408619    3277 out.go:358] Setting ErrFile to fd 2...
	I0831 15:40:38.408622    3277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:40:38.408795    3277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:40:38.410110    3277 out.go:352] Setting JSON to false
	I0831 15:40:38.429697    3277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2402,"bootTime":1725141636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:40:38.429771    3277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:40:38.435038    3277 out.go:177] * [ha-166000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:40:38.442981    3277 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:40:38.443022    3277 notify.go:220] Checking for updates...
	I0831 15:40:38.451937    3277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:40:38.454906    3277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:40:38.457894    3277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:40:38.461015    3277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:40:38.462096    3277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:40:38.465265    3277 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:40:38.465330    3277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:40:38.469909    3277 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:40:38.474931    3277 start.go:297] selected driver: qemu2
	I0831 15:40:38.474940    3277 start.go:901] validating driver "qemu2" against &{Name:ha-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-166000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:40:38.475033    3277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:40:38.477761    3277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:40:38.477788    3277 cni.go:84] Creating CNI manager for ""
	I0831 15:40:38.477796    3277 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 15:40:38.477856    3277 start.go:340] cluster config:
	{Name:ha-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-166000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:40:38.481965    3277 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:40:38.490897    3277 out.go:177] * Starting "ha-166000" primary control-plane node in "ha-166000" cluster
	I0831 15:40:38.494930    3277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:40:38.494945    3277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:40:38.494957    3277 cache.go:56] Caching tarball of preloaded images
	I0831 15:40:38.495013    3277 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:40:38.495018    3277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:40:38.495089    3277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/ha-166000/config.json ...
	I0831 15:40:38.495568    3277 start.go:360] acquireMachinesLock for ha-166000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:40:38.495605    3277 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "ha-166000"
	I0831 15:40:38.495614    3277 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:40:38.495619    3277 fix.go:54] fixHost starting: 
	I0831 15:40:38.495742    3277 fix.go:112] recreateIfNeeded on ha-166000: state=Stopped err=<nil>
	W0831 15:40:38.495751    3277 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:40:38.499901    3277 out.go:177] * Restarting existing qemu2 VM for "ha-166000" ...
	I0831 15:40:38.507915    3277 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:40:38.507952    3277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:fd:e4:89:24:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/disk.qcow2
	I0831 15:40:38.510060    3277 main.go:141] libmachine: STDOUT: 
	I0831 15:40:38.510087    3277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:40:38.510119    3277 fix.go:56] duration metric: took 14.501666ms for fixHost
	I0831 15:40:38.510123    3277 start.go:83] releasing machines lock for "ha-166000", held for 14.514083ms
	W0831 15:40:38.510130    3277 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:40:38.510160    3277 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:40:38.510165    3277 start.go:729] Will try again in 5 seconds ...
	I0831 15:40:43.512193    3277 start.go:360] acquireMachinesLock for ha-166000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:40:43.512668    3277 start.go:364] duration metric: took 358.708µs to acquireMachinesLock for "ha-166000"
	I0831 15:40:43.512932    3277 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:40:43.512951    3277 fix.go:54] fixHost starting: 
	I0831 15:40:43.513713    3277 fix.go:112] recreateIfNeeded on ha-166000: state=Stopped err=<nil>
	W0831 15:40:43.513739    3277 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:40:43.517213    3277 out.go:177] * Restarting existing qemu2 VM for "ha-166000" ...
	I0831 15:40:43.525218    3277 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:40:43.525453    3277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:fd:e4:89:24:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/disk.qcow2
	I0831 15:40:43.534536    3277 main.go:141] libmachine: STDOUT: 
	I0831 15:40:43.534615    3277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:40:43.534687    3277 fix.go:56] duration metric: took 21.737375ms for fixHost
	I0831 15:40:43.534709    3277 start.go:83] releasing machines lock for "ha-166000", held for 21.932083ms
	W0831 15:40:43.534905    3277 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:40:43.543199    3277 out.go:201] 
	W0831 15:40:43.547187    3277 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:40:43.547211    3277 out.go:270] * 
	* 
	W0831 15:40:43.550037    3277 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:40:43.554182    3277 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-166000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-166000
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (32.248583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
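
Every restart attempt in this test fails the same way: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A hypothetical pre-flight check in Go (not part of minikube; the socket path is taken from the log above) that would surface this before any VM restart is attempted:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path from the log above
	conn, err := net.DialTimeout("unix", socketPath, time.Second)
	if err != nil {
		// Matches this run's failure mode: nothing is listening on the socket.
		fmt.Printf("socket_vmnet unreachable at %s: %v\n", socketPath, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

While this dial fails, no qemu2 VM start on the host can succeed, which is consistent with the remaining tests in this group failing within seconds.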

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-166000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.089375ms)

-- stdout --
	* The control-plane node ha-166000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-166000"

-- /stdout --
** stderr ** 
	I0831 15:40:43.695958    3290 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:40:43.696189    3290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:40:43.696193    3290 out.go:358] Setting ErrFile to fd 2...
	I0831 15:40:43.696195    3290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:40:43.696314    3290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:40:43.696522    3290 mustload.go:65] Loading cluster: ha-166000
	I0831 15:40:43.696750    3290 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0831 15:40:43.697045    3290 out.go:270] ! The control-plane node ha-166000 host is not running (will try others): state=Stopped
	! The control-plane node ha-166000 host is not running (will try others): state=Stopped
	W0831 15:40:43.697148    3290 out.go:270] ! The control-plane node ha-166000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-166000-m02 host is not running (will try others): state=Stopped
	I0831 15:40:43.701196    3290 out.go:177] * The control-plane node ha-166000-m03 host is not running: state=Stopped
	I0831 15:40:43.704379    3290 out.go:177]   To start a cluster, run: "minikube start -p ha-166000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-166000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr: exit status 7 (29.904583ms)

-- stdout --
	ha-166000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0831 15:40:43.736173    3292 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:40:43.736332    3292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:40:43.736336    3292 out.go:358] Setting ErrFile to fd 2...
	I0831 15:40:43.736338    3292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:40:43.736457    3292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:40:43.736571    3292 out.go:352] Setting JSON to false
	I0831 15:40:43.736582    3292 mustload.go:65] Loading cluster: ha-166000
	I0831 15:40:43.736625    3292 notify.go:220] Checking for updates...
	I0831 15:40:43.736798    3292 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:40:43.736805    3292 status.go:255] checking status of ha-166000 ...
	I0831 15:40:43.737008    3292 status.go:330] ha-166000 host status = "Stopped" (err=<nil>)
	I0831 15:40:43.737011    3292 status.go:343] host is not running, skipping remaining checks
	I0831 15:40:43.737014    3292 status.go:257] ha-166000 status: &{Name:ha-166000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:40:43.737023    3292 status.go:255] checking status of ha-166000-m02 ...
	I0831 15:40:43.737110    3292 status.go:330] ha-166000-m02 host status = "Stopped" (err=<nil>)
	I0831 15:40:43.737113    3292 status.go:343] host is not running, skipping remaining checks
	I0831 15:40:43.737115    3292 status.go:257] ha-166000-m02 status: &{Name:ha-166000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:40:43.737119    3292 status.go:255] checking status of ha-166000-m03 ...
	I0831 15:40:43.737203    3292 status.go:330] ha-166000-m03 host status = "Stopped" (err=<nil>)
	I0831 15:40:43.737205    3292 status.go:343] host is not running, skipping remaining checks
	I0831 15:40:43.737207    3292 status.go:257] ha-166000-m03 status: &{Name:ha-166000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:40:43.737211    3292 status.go:255] checking status of ha-166000-m04 ...
	I0831 15:40:43.737307    3292 status.go:330] ha-166000-m04 host status = "Stopped" (err=<nil>)
	I0831 15:40:43.737312    3292 status.go:343] host is not running, skipping remaining checks
	I0831 15:40:43.737314    3292 status.go:257] ha-166000-m04 status: &{Name:ha-166000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (29.488667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
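
The exit status 83 above comes from "node delete" walking the control-plane nodes, skipping each host that is not running, and finally printing start advice instead of deleting anything. A simplified, illustrative reconstruction of that selection loop (the types and function here are ours, not minikube's real code):

package main

import "fmt"

type node struct {
	Name         string
	ControlPlane bool
	HostState    string // "Running" or "Stopped"
}

// firstRunningControlPlane returns the first control-plane node whose host
// is running, mirroring the "will try others" messages in the stderr above.
func firstRunningControlPlane(nodes []node) (node, bool) {
	for _, n := range nodes {
		if !n.ControlPlane {
			continue
		}
		if n.HostState != "Running" {
			fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n.Name, n.HostState)
			continue
		}
		return n, true
	}
	return node{}, false
}

func main() {
	nodes := []node{
		{Name: "ha-166000", ControlPlane: true, HostState: "Stopped"},
		{Name: "ha-166000-m02", ControlPlane: true, HostState: "Stopped"},
		{Name: "ha-166000-m03", ControlPlane: true, HostState: "Stopped"},
	}
	if _, ok := firstRunningControlPlane(nodes); !ok {
		fmt.Println(`To start a cluster, run: "minikube start -p ha-166000"`)
		// a real CLI would exit with a reserved status here (83 in this run)
	}
}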

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.24s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.184235792s)
ha_test.go:413: expected profile "ha-166000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-166000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-166000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-166000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (51.63575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.24s)
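
ha_test.go:413 decodes the "profile list --output json" payload shown above and asserts the profile's Status field. A minimal sketch of that check, assuming only the Name and Status fields of the "valid" array (all other fields in the payload are ignored here for brevity):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields this check needs from the JSON above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-166000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}

In this run the profile reports "Stopped" because no VM could be restarted, so the expected "Degraded" state (some but not all control planes healthy) is never reached.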

TestMultiControlPlane/serial/StopCluster (300.24s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 stop -v=7 --alsologtostderr
E0831 15:42:07.233079    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:42:32.426956    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:43:30.316003    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-166000 stop -v=7 --alsologtostderr: (5m0.140782s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr: exit status 7 (67.777125ms)

-- stdout --
	ha-166000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0831 15:45:45.203027    3421 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:45:45.203260    3421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:45:45.203265    3421 out.go:358] Setting ErrFile to fd 2...
	I0831 15:45:45.203268    3421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:45:45.203453    3421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:45:45.203633    3421 out.go:352] Setting JSON to false
	I0831 15:45:45.203646    3421 mustload.go:65] Loading cluster: ha-166000
	I0831 15:45:45.203682    3421 notify.go:220] Checking for updates...
	I0831 15:45:45.203980    3421 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:45:45.203992    3421 status.go:255] checking status of ha-166000 ...
	I0831 15:45:45.204268    3421 status.go:330] ha-166000 host status = "Stopped" (err=<nil>)
	I0831 15:45:45.204273    3421 status.go:343] host is not running, skipping remaining checks
	I0831 15:45:45.204276    3421 status.go:257] ha-166000 status: &{Name:ha-166000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:45:45.204290    3421 status.go:255] checking status of ha-166000-m02 ...
	I0831 15:45:45.204422    3421 status.go:330] ha-166000-m02 host status = "Stopped" (err=<nil>)
	I0831 15:45:45.204427    3421 status.go:343] host is not running, skipping remaining checks
	I0831 15:45:45.204430    3421 status.go:257] ha-166000-m02 status: &{Name:ha-166000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:45:45.204435    3421 status.go:255] checking status of ha-166000-m03 ...
	I0831 15:45:45.204562    3421 status.go:330] ha-166000-m03 host status = "Stopped" (err=<nil>)
	I0831 15:45:45.204566    3421 status.go:343] host is not running, skipping remaining checks
	I0831 15:45:45.204568    3421 status.go:257] ha-166000-m03 status: &{Name:ha-166000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 15:45:45.204573    3421 status.go:255] checking status of ha-166000-m04 ...
	I0831 15:45:45.204691    3421 status.go:330] ha-166000-m04 host status = "Stopped" (err=<nil>)
	I0831 15:45:45.204695    3421 status.go:343] host is not running, skipping remaining checks
	I0831 15:45:45.204697    3421 status.go:257] ha-166000-m04 status: &{Name:ha-166000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr": ha-166000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr": ha-166000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr": ha-166000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-166000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (32.956791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (300.24s)
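
The three assertions above (ha_test.go:543, :549, :552) scan the plain-text status output for node types and component states. A rough, illustrative way to express those counts in Go (the actual test's parsing may differ):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated excerpt of the status output above; the full text
	// covers all four nodes of the ha-166000 cluster.
	status := `ha-166000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped

ha-166000-m04
type: Worker
host: Stopped
kubelet: Stopped`

	controlPlanes := strings.Count(status, "type: Control Plane")
	stoppedKubelets := strings.Count(status, "kubelet: Stopped")
	stoppedAPIServers := strings.Count(status, "apiserver: Stopped")
	fmt.Println(controlPlanes, stoppedKubelets, stoppedAPIServers)
}

Because the earlier DeleteSecondaryNode step failed, status still lists three control-plane nodes instead of the two the test expects, so all three count checks fail even though every host did stop.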

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-166000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-166000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186378625s)

-- stdout --
	* [ha-166000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-166000" primary control-plane node in "ha-166000" cluster
	* Restarting existing qemu2 VM for "ha-166000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-166000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:45:45.267435    3425 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:45:45.267586    3425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:45:45.267589    3425 out.go:358] Setting ErrFile to fd 2...
	I0831 15:45:45.267591    3425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:45:45.267726    3425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:45:45.268776    3425 out.go:352] Setting JSON to false
	I0831 15:45:45.284940    3425 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2709,"bootTime":1725141636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:45:45.285005    3425 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:45:45.290653    3425 out.go:177] * [ha-166000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:45:45.297598    3425 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:45:45.297652    3425 notify.go:220] Checking for updates...
	I0831 15:45:45.305473    3425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:45:45.308604    3425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:45:45.312573    3425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:45:45.313852    3425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:45:45.316566    3425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:45:45.319910    3425 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:45:45.320158    3425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:45:45.324477    3425 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:45:45.331641    3425 start.go:297] selected driver: qemu2
	I0831 15:45:45.331648    3425 start.go:901] validating driver "qemu2" against &{Name:ha-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-166000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:45:45.331767    3425 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:45:45.334041    3425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:45:45.334082    3425 cni.go:84] Creating CNI manager for ""
	I0831 15:45:45.334088    3425 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 15:45:45.334141    3425 start.go:340] cluster config:
	{Name:ha-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-166000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:45:45.337687    3425 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:45:45.346570    3425 out.go:177] * Starting "ha-166000" primary control-plane node in "ha-166000" cluster
	I0831 15:45:45.350586    3425 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:45:45.350599    3425 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:45:45.350608    3425 cache.go:56] Caching tarball of preloaded images
	I0831 15:45:45.350657    3425 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:45:45.350663    3425 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:45:45.350733    3425 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/ha-166000/config.json ...
	I0831 15:45:45.351217    3425 start.go:360] acquireMachinesLock for ha-166000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:45:45.351254    3425 start.go:364] duration metric: took 31.166µs to acquireMachinesLock for "ha-166000"
	I0831 15:45:45.351263    3425 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:45:45.351273    3425 fix.go:54] fixHost starting: 
	I0831 15:45:45.351390    3425 fix.go:112] recreateIfNeeded on ha-166000: state=Stopped err=<nil>
	W0831 15:45:45.351400    3425 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:45:45.355597    3425 out.go:177] * Restarting existing qemu2 VM for "ha-166000" ...
	I0831 15:45:45.363575    3425 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:45:45.363619    3425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:fd:e4:89:24:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/disk.qcow2
	I0831 15:45:45.365789    3425 main.go:141] libmachine: STDOUT: 
	I0831 15:45:45.365811    3425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:45:45.365840    3425 fix.go:56] duration metric: took 14.568125ms for fixHost
	I0831 15:45:45.365846    3425 start.go:83] releasing machines lock for "ha-166000", held for 14.58675ms
	W0831 15:45:45.365851    3425 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:45:45.365893    3425 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:45:45.365898    3425 start.go:729] Will try again in 5 seconds ...
	I0831 15:45:50.367692    3425 start.go:360] acquireMachinesLock for ha-166000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:45:50.368133    3425 start.go:364] duration metric: took 336.709µs to acquireMachinesLock for "ha-166000"
	I0831 15:45:50.368280    3425 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:45:50.368302    3425 fix.go:54] fixHost starting: 
	I0831 15:45:50.369065    3425 fix.go:112] recreateIfNeeded on ha-166000: state=Stopped err=<nil>
	W0831 15:45:50.369091    3425 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:45:50.372551    3425 out.go:177] * Restarting existing qemu2 VM for "ha-166000" ...
	I0831 15:45:50.380419    3425 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:45:50.380659    3425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:fd:e4:89:24:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/ha-166000/disk.qcow2
	I0831 15:45:50.390222    3425 main.go:141] libmachine: STDOUT: 
	I0831 15:45:50.390317    3425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:45:50.390422    3425 fix.go:56] duration metric: took 22.12175ms for fixHost
	I0831 15:45:50.390466    3425 start.go:83] releasing machines lock for "ha-166000", held for 22.288417ms
	W0831 15:45:50.390670    3425 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:45:50.399507    3425 out.go:201] 
	W0831 15:45:50.403554    3425 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:45:50.403578    3425 out.go:270] * 
	* 
	W0831 15:45:50.406068    3425 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:45:50.413522    3425 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-166000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (71.224042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
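
Every start in this block dies at the same point: socket_vmnet_client cannot reach the launcher socket, so QEMU never receives the fd=3 netdev visible in the command lines above. The following is a minimal Go probe, assuming only the /var/run/socket_vmnet path quoted in the logs, that reproduces the "Connection refused" without touching any VM:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the qemu command lines above; socket_vmnet_client
	// must connect here before it can hand QEMU a network fd.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

When this probe fails for every profile in the run, the fault is on the host side (the socket_vmnet daemon not running, or its socket path stale), not in any individual test.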

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-166000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-166000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-166000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-166000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (52.274333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.03s)
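
The ha_test.go:413 assertion above reads only the Name and Status fields out of that large profile-list blob. A trimmed decoder, sketching just those two fields under the field names visible in the quoted JSON, shows what drives the "Degraded" vs "Stopped" comparison:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the check needs; json.Unmarshal ignores the rest of
// the schema quoted in the log above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		// The test wants "Degraded" here; this run reports "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}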

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-166000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-166000 --control-plane -v=7 --alsologtostderr: exit status 83 (63.424375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-166000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-166000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:45:51.568542    3467 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:45:51.568692    3467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:45:51.568696    3467 out.go:358] Setting ErrFile to fd 2...
	I0831 15:45:51.568698    3467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:45:51.568843    3467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:45:51.569089    3467 mustload.go:65] Loading cluster: ha-166000
	I0831 15:45:51.569312    3467 config.go:182] Loaded profile config "ha-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0831 15:45:51.569644    3467 out.go:270] ! The control-plane node ha-166000 host is not running (will try others): state=Stopped
	! The control-plane node ha-166000 host is not running (will try others): state=Stopped
	W0831 15:45:51.569755    3467 out.go:270] ! The control-plane node ha-166000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-166000-m02 host is not running (will try others): state=Stopped
	I0831 15:45:51.575034    3467 out.go:177] * The control-plane node ha-166000-m03 host is not running: state=Stopped
	I0831 15:45:51.583003    3467 out.go:177]   To start a cluster, run: "minikube start -p ha-166000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-166000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-166000 -n ha-166000: exit status 7 (33.732ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "ha-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.10s)
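
The stderr above shows "node add" trying each existing control-plane host in order and exiting with code 83 once all three report Stopped. A sketch of that fallback, not minikube's actual mustload code, with the node names and states hard-coded from this run rather than loaded from the profile config:

package main

import "fmt"

func main() {
	order := []string{"ha-166000", "ha-166000-m02", "ha-166000-m03"}
	state := map[string]string{
		"ha-166000":     "Stopped",
		"ha-166000-m02": "Stopped",
		"ha-166000-m03": "Stopped",
	}
	for i, name := range order {
		if state[name] == "Running" {
			fmt.Printf("joining cluster via %s\n", name)
			return
		}
		if i < len(order)-1 {
			// Warning form used while there are still hosts to try.
			fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", name, state[name])
		} else {
			// Final host exhausted: minikube gives up with exit status 83.
			fmt.Printf("* The control-plane node %s host is not running: state=%s\n", name, state[name])
		}
	}
}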

                                                
                                    
TestImageBuild/serial/Setup (10.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-215000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-215000 --driver=qemu2 : exit status 80 (10.036298167s)

                                                
                                                
-- stdout --
	* [image-215000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-215000" primary control-plane node in "image-215000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-215000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-215000 --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-215000 -n image-215000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-215000 -n image-215000: exit status 7 (67.8405ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "image-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.10s)
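
Each post-mortem in this report runs "minikube status" with a Go template and treats exit status 7 as tolerable ("may be ok"). A sketch of that tolerance, assuming only what the helpers themselves assume, namely that exit 7 still leaves a usable state string on stdout:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status probe as the post-mortem steps above.
// On exit status 7 the captured stdout still holds a state ("Stopped"
// here), so it is reported instead of failing the post-mortem outright.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		return strings.TrimSpace(string(out)), nil // exit status 7 (may be ok)
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostState("image-215000")
	fmt.Println(state, err)
}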

                                                
                                    
TestJSONOutput/start/Command (9.87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-629000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-629000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.867029333s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"64855923-9711-4178-be27-310c46ae214b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-629000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bddb91b-04be-472a-820f-c2e5c73b2ad2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"e64162cf-b477-4d44-8a00-5e84b07ebed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig"}}
	{"specversion":"1.0","id":"b374ba80-2ce3-494c-8951-5949ae8ea6a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8a84eb29-204c-4bc3-ba12-ad768730bf1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"357ea8d8-3e5f-44e2-8513-0f060027adbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube"}}
	{"specversion":"1.0","id":"7811229d-df8b-4693-aca8-d859539d1517","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fe4641bb-36b7-4465-8b7f-136671118df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"98cdcd90-7163-4827-9568-33f5a4430f0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"797d439c-a148-4469-beb5-e6424a55f2d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-629000\" primary control-plane node in \"json-output-629000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"367909df-a7d5-4eec-be8b-e26046843e5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"16c30572-6a66-47c1-ae10-4f6284e00ba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-629000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"09adba59-d01e-4fff-b222-390d4f292415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a8042c2f-a0f8-4587-bb4f-a8fd915b304a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"83ca862e-5bac-48ff-bba5-e3ac982966b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-629000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a384a755-31f8-4547-ab0c-fbf155a95849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d04c5b32-ee02-4253-b52c-94ec176d2a0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-629000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.87s)
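
TestJSONOutput parses stdout line by line as CloudEvents, so the raw "OUTPUT:" and "ERROR:" lines leaked by the failed qemu start break the decoder at their first byte, which is exactly the "invalid character 'O'" reported above. A cut-down version of that per-line check (pipe the -- stdout -- block above into it to reproduce the failure):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are very long
	for sc.Scan() {
		line := sc.Text()
		if line == "" {
			continue
		}
		// Every non-empty line must decode as a JSON object (a CloudEvent);
		// "OUTPUT:" fails immediately on its leading 'O'.
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Printf("converting to cloud events: %v (line: %.40q)\n", err, line)
			os.Exit(1)
		}
	}
}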

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-629000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-629000 --output=json --user=testUser: exit status 83 (78.338209ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ce5d752a-30bc-47f7-a4d2-513bcf6668b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-629000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"fa6bc18e-344c-4b27-b0c2-104bd18de1ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-629000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-629000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-629000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-629000 --output=json --user=testUser: exit status 83 (45.240917ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-629000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-629000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-629000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-629000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.27s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-391000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-391000 --driver=qemu2 : exit status 80 (9.975824209s)

                                                
                                                
-- stdout --
	* [first-391000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-391000" primary control-plane node in "first-391000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-391000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-391000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-31 15:46:26.23838 -0700 PDT m=+2493.173538626
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-393000 -n second-393000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-393000 -n second-393000: exit status 85 (81.380417ms)

                                                
                                                
-- stdout --
	* Profile "second-393000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-393000"

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "second-393000" host is not running, skipping log retrieval (state="* Profile \"second-393000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-393000\"")
helpers_test.go:176: Cleaning up "second-393000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-393000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-31 15:46:26.424679 -0700 PDT m=+2493.359842001
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-391000 -n first-391000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-391000 -n first-391000: exit status 7 (30.781583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "first-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "first-391000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-391000
--- FAIL: TestMinikubeProfile (10.27s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-675000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-675000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.972077083s)

                                                
                                                
-- stdout --
	* [mount-start-1-675000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-675000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-675000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-675000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-675000 -n mount-start-1-675000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-675000 -n mount-start-1-675000: exit status 7 (66.0195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "mount-start-1-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.04s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.969477958s)

                                                
                                                
-- stdout --
	* [multinode-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-334000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:46:36.783873    3616 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:46:36.784012    3616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:46:36.784015    3616 out.go:358] Setting ErrFile to fd 2...
	I0831 15:46:36.784018    3616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:46:36.784156    3616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:46:36.785177    3616 out.go:352] Setting JSON to false
	I0831 15:46:36.801232    3616 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2760,"bootTime":1725141636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:46:36.801299    3616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:46:36.805982    3616 out.go:177] * [multinode-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:46:36.814493    3616 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:46:36.814541    3616 notify.go:220] Checking for updates...
	I0831 15:46:36.821434    3616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:46:36.824507    3616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:46:36.827471    3616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:46:36.830500    3616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:46:36.833500    3616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:46:36.835068    3616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:46:36.839429    3616 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:46:36.846350    3616 start.go:297] selected driver: qemu2
	I0831 15:46:36.846356    3616 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:46:36.846364    3616 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:46:36.848588    3616 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:46:36.852427    3616 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:46:36.855523    3616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:46:36.855555    3616 cni.go:84] Creating CNI manager for ""
	I0831 15:46:36.855559    3616 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0831 15:46:36.855567    3616 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 15:46:36.855603    3616 start.go:340] cluster config:
	{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:46:36.859360    3616 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:46:36.867454    3616 out.go:177] * Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	I0831 15:46:36.871506    3616 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:46:36.871520    3616 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:46:36.871529    3616 cache.go:56] Caching tarball of preloaded images
	I0831 15:46:36.871593    3616 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:46:36.871598    3616 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:46:36.871802    3616 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/multinode-334000/config.json ...
	I0831 15:46:36.871814    3616 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/multinode-334000/config.json: {Name:mkd203d425b4085e7d6d8573e17b41a9700ddd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:46:36.872037    3616 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:46:36.872074    3616 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "multinode-334000"
	I0831 15:46:36.872085    3616 start.go:93] Provisioning new machine with config: &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:46:36.872116    3616 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:46:36.880468    3616 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 15:46:36.898549    3616 start.go:159] libmachine.API.Create for "multinode-334000" (driver="qemu2")
	I0831 15:46:36.898579    3616 client.go:168] LocalClient.Create starting
	I0831 15:46:36.898644    3616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:46:36.898674    3616 main.go:141] libmachine: Decoding PEM data...
	I0831 15:46:36.898684    3616 main.go:141] libmachine: Parsing certificate...
	I0831 15:46:36.898719    3616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:46:36.898745    3616 main.go:141] libmachine: Decoding PEM data...
	I0831 15:46:36.898756    3616 main.go:141] libmachine: Parsing certificate...
	I0831 15:46:36.899108    3616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:46:37.055289    3616 main.go:141] libmachine: Creating SSH key...
	I0831 15:46:37.255519    3616 main.go:141] libmachine: Creating Disk image...
	I0831 15:46:37.255525    3616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:46:37.255753    3616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:46:37.265191    3616 main.go:141] libmachine: STDOUT: 
	I0831 15:46:37.265210    3616 main.go:141] libmachine: STDERR: 
	I0831 15:46:37.265274    3616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2 +20000M
	I0831 15:46:37.273075    3616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:46:37.273090    3616 main.go:141] libmachine: STDERR: 
	I0831 15:46:37.273105    3616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:46:37.273110    3616 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:46:37.273124    3616 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:46:37.273149    3616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:a5:5b:9e:65:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:46:37.274703    3616 main.go:141] libmachine: STDOUT: 
	I0831 15:46:37.274721    3616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:46:37.274742    3616 client.go:171] duration metric: took 376.166125ms to LocalClient.Create
	I0831 15:46:39.276909    3616 start.go:128] duration metric: took 2.404830375s to createHost
	I0831 15:46:39.276951    3616 start.go:83] releasing machines lock for "multinode-334000", held for 2.404928583s
	W0831 15:46:39.277008    3616 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:46:39.290072    3616 out.go:177] * Deleting "multinode-334000" in qemu2 ...
	W0831 15:46:39.320025    3616 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:46:39.320046    3616 start.go:729] Will try again in 5 seconds ...
	I0831 15:46:44.321195    3616 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:46:44.321669    3616 start.go:364] duration metric: took 368.792µs to acquireMachinesLock for "multinode-334000"
	I0831 15:46:44.321831    3616 start.go:93] Provisioning new machine with config: &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:46:44.322117    3616 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:46:44.331878    3616 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 15:46:44.382112    3616 start.go:159] libmachine.API.Create for "multinode-334000" (driver="qemu2")
	I0831 15:46:44.382155    3616 client.go:168] LocalClient.Create starting
	I0831 15:46:44.382267    3616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:46:44.382354    3616 main.go:141] libmachine: Decoding PEM data...
	I0831 15:46:44.382369    3616 main.go:141] libmachine: Parsing certificate...
	I0831 15:46:44.382425    3616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:46:44.382470    3616 main.go:141] libmachine: Decoding PEM data...
	I0831 15:46:44.382484    3616 main.go:141] libmachine: Parsing certificate...
	I0831 15:46:44.383204    3616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:46:44.550508    3616 main.go:141] libmachine: Creating SSH key...
	I0831 15:46:44.657776    3616 main.go:141] libmachine: Creating Disk image...
	I0831 15:46:44.657782    3616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:46:44.657967    3616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:46:44.667044    3616 main.go:141] libmachine: STDOUT: 
	I0831 15:46:44.667063    3616 main.go:141] libmachine: STDERR: 
	I0831 15:46:44.667110    3616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2 +20000M
	I0831 15:46:44.675001    3616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:46:44.675018    3616 main.go:141] libmachine: STDERR: 
	I0831 15:46:44.675030    3616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:46:44.675035    3616 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:46:44.675045    3616 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:46:44.675076    3616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9e:82:74:59:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:46:44.676628    3616 main.go:141] libmachine: STDOUT: 
	I0831 15:46:44.676645    3616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:46:44.676657    3616 client.go:171] duration metric: took 294.505292ms to LocalClient.Create
	I0831 15:46:46.678766    3616 start.go:128] duration metric: took 2.356682333s to createHost
	I0831 15:46:46.678868    3616 start.go:83] releasing machines lock for "multinode-334000", held for 2.357173167s
	W0831 15:46:46.679138    3616 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:46:46.688671    3616 out.go:201] 
	W0831 15:46:46.698792    3616 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:46:46.698833    3616 out.go:270] * 
	* 
	W0831 15:46:46.700859    3616 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:46:46.710645    3616 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-334000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (66.8895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.04s)
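
Note: both createHost attempts above die at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), so qemu-system-aarch64 never receives a connected network fd and the VM is torn down. A minimal standalone sketch (not part of the test suite) that probes the SocketVMnetPath recorded in the profile config the same way:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client connects on behalf of qemu.
        // A "connection refused" here reproduces the failure in the log above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails the same way on the test host, the daemon backing /var/run/socket_vmnet is down, which would account for every qemu2 start failure in this report.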

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (109.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.728333ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-334000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- rollout status deployment/busybox: exit status 1 (58.769917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.709875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.745334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.895375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.383084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.001833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.749834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0831 15:47:07.225491    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.410291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.291167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0831 15:47:32.418970    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.917125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.887666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.657125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.346125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.337416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.674042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.807041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.251292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (109.10s)
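
Note: every kubectl failure above (and in the PingHostFrom2Pods, AddNode, and MultiNodeLabels subtests below) is downstream of the failed FreshStart2Nodes: the profile exists, but no kubeconfig context or server entry was ever written for "multinode-334000". A sketch of checking for the missing context up front (assuming the default kubeconfig path and a k8s.io/client-go dependency, neither of which the tests here use):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig kubectl consults and look for the
        // profile's context; after the failed start, no such entry exists.
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            fmt.Println("cannot load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["multinode-334000"]; !ok {
            fmt.Println(`no context "multinode-334000": cluster was never provisioned`)
        }
    }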

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.02575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.80375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-334000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-334000 -v 3 --alsologtostderr: exit status 83 (41.982833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-334000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:36.002996    3724 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:36.003362    3724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.003366    3724 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:36.003369    3724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.003557    3724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:36.003839    3724 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:36.004199    3724 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:36.008797    3724 out.go:177] * The control-plane node multinode-334000 host is not running: state=Stopped
	I0831 15:48:36.011741    3724 out.go:177]   To start a cluster, run: "minikube start -p multinode-334000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-334000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.015209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-334000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-334000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.47675ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-334000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-334000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-334000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.562292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-334000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-334000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-334000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-334000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.908166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
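
Note: the assertion wants 3 nodes, but the quoted Config.Nodes array holds only the primary control-plane entry, since the second node was never created. A sketch of the node-count check against the 'profile list --output json' shape quoted above (field names taken verbatim from that payload):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors just enough of the payload for the node-count check.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []json.RawMessage `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    // nodeCount returns how many entries Config.Nodes holds for the named profile.
    func nodeCount(raw []byte, profile string) (int, error) {
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            return 0, err
        }
        for _, p := range pl.Valid {
            if p.Name == profile {
                return len(p.Config.Nodes), nil
            }
        }
        return 0, fmt.Errorf("profile %q not found", profile)
    }

    func main() {
        n, err := nodeCount([]byte(`{"invalid":[],"valid":[]}`), "multinode-334000")
        fmt.Println(n, err) // the real input is the JSON quoted in the failure above
    }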

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --output json --alsologtostderr: exit status 7 (29.693292ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-334000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:36.210944    3736 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:36.211116    3736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.211119    3736 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:36.211122    3736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.211248    3736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:36.211362    3736 out.go:352] Setting JSON to true
	I0831 15:48:36.211371    3736 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:36.211432    3736 notify.go:220] Checking for updates...
	I0831 15:48:36.211574    3736 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:36.211580    3736 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:36.211786    3736 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:36.211790    3736 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:36.211793    3736 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-334000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.226917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
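
Note: the unmarshal error happens because 'minikube status --output json' prints a bare object for a single-node cluster while the test decodes into []cmd.Status. A tolerant-decoder sketch (Status is a local stand-in for cmd.Status, with field names copied from the stdout above) that accepts both shapes:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status stands in for the cmd.Status type named in the error above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    // decodeStatuses accepts an array (multi-node) or a bare object (one node).
    func decodeStatuses(raw []byte) ([]Status, error) {
        var many []Status
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil
        }
        var one Status
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, err
        }
        return []Status{one}, nil
    }

    func main() {
        raw := []byte(`{"Name":"multinode-334000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        st, err := decodeStatuses(raw)
        fmt.Println(st, err)
    }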

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 node stop m03: exit status 85 (47.06575ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-334000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status: exit status 7 (29.870875ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr: exit status 7 (30.700875ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:36.348909    3744 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:36.349287    3744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.349292    3744 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:36.349294    3744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.349508    3744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:36.349653    3744 out.go:352] Setting JSON to false
	I0831 15:48:36.349663    3744 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:36.349718    3744 notify.go:220] Checking for updates...
	I0831 15:48:36.350120    3744 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:36.350129    3744 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:36.350343    3744 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:36.350347    3744 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:36.350350    3744 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr": multinode-334000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.534208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
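
Note: exit status 85 (GUEST_NODE_RETRIEVE) follows from the same root cause: the profile's Nodes list only ever contained the primary node, so there is no m03 to stop. A sketch (assuming the profile's config.json matches the Config object quoted in the ProfileList failure above) that lists which node names the profile actually holds:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // profileConfig mirrors only the Nodes slice of the profile's config.json.
    type profileConfig struct {
        Nodes []struct {
            Name         string `json:"Name"`
            ControlPlane bool   `json:"ControlPlane"`
        } `json:"Nodes"`
    }

    func main() {
        raw, err := os.ReadFile(os.Args[1]) // path to .../profiles/<name>/config.json
        if err != nil {
            fmt.Println(err)
            return
        }
        var cfg profileConfig
        if err := json.Unmarshal(raw, &cfg); err != nil {
            fmt.Println(err)
            return
        }
        // Only the primary node (stored with an empty Name) exists here, so a
        // lookup for "m03" has nothing to find.
        for i, n := range cfg.Nodes {
            fmt.Printf("node %d: name=%q controlPlane=%v\n", i, n.Name, n.ControlPlane)
        }
    }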

                                                
                                    
TestMultiNode/serial/StartAfterStop (47.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.373375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:36.410460    3748 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:36.410699    3748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.410705    3748 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:36.410708    3748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.410842    3748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:36.411072    3748 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:36.411274    3748 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:36.414749    3748 out.go:201] 
	W0831 15:48:36.417766    3748 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0831 15:48:36.417774    3748 out.go:270] * 
	* 
	W0831 15:48:36.419349    3748 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:48:36.422777    3748 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0831 15:48:36.410460    3748 out.go:345] Setting OutFile to fd 1 ...
I0831 15:48:36.410699    3748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:48:36.410705    3748 out.go:358] Setting ErrFile to fd 2...
I0831 15:48:36.410708    3748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:48:36.410842    3748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:48:36.411072    3748 mustload.go:65] Loading cluster: multinode-334000
I0831 15:48:36.411274    3748 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:48:36.414749    3748 out.go:201] 
W0831 15:48:36.417766    3748 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0831 15:48:36.417774    3748 out.go:270] * 
* 
W0831 15:48:36.419349    3748 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0831 15:48:36.422777    3748 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-334000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (29.774417ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:36.454765    3750 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:36.454911    3750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.454915    3750 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:36.454917    3750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:36.455067    3750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:36.455176    3750 out.go:352] Setting JSON to false
	I0831 15:48:36.455187    3750 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:36.455240    3750 notify.go:220] Checking for updates...
	I0831 15:48:36.455376    3750 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:36.455383    3750 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:36.455593    3750 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:36.455597    3750 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:36.455599    3750 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (74.667792ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:37.791700    3754 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:37.791918    3754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:37.791923    3754 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:37.791927    3754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:37.792119    3754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:37.792296    3754 out.go:352] Setting JSON to false
	I0831 15:48:37.792310    3754 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:37.792356    3754 notify.go:220] Checking for updates...
	I0831 15:48:37.792606    3754 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:37.792620    3754 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:37.792942    3754 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:37.792947    3754 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:37.792950    3754 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (73.396583ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:38.879830    3756 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:38.880043    3756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:38.880047    3756 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:38.880050    3756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:38.880204    3756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:38.880350    3756 out.go:352] Setting JSON to false
	I0831 15:48:38.880364    3756 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:38.880406    3756 notify.go:220] Checking for updates...
	I0831 15:48:38.880622    3756 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:38.880631    3756 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:38.880908    3756 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:38.880913    3756 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:38.880916    3756 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (75.299583ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:40.485708    3758 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:40.485930    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:40.485938    3758 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:40.485941    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:40.486136    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:40.486304    3758 out.go:352] Setting JSON to false
	I0831 15:48:40.486319    3758 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:40.486370    3758 notify.go:220] Checking for updates...
	I0831 15:48:40.486607    3758 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:40.486616    3758 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:40.486884    3758 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:40.486889    3758 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:40.486891    3758 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (73.614625ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:45.543255    3765 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:45.543433    3765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:45.543442    3765 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:45.543445    3765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:45.543623    3765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:45.543788    3765 out.go:352] Setting JSON to false
	I0831 15:48:45.543804    3765 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:45.543850    3765 notify.go:220] Checking for updates...
	I0831 15:48:45.544057    3765 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:45.544068    3765 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:45.544350    3765 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:45.544355    3765 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:45.544358    3765 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (77.071417ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:51.700787    3769 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:51.701249    3769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:51.701256    3769 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:51.701259    3769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:51.701518    3769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:51.701711    3769 out.go:352] Setting JSON to false
	I0831 15:48:51.701725    3769 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:51.701813    3769 notify.go:220] Checking for updates...
	I0831 15:48:51.702301    3769 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:51.702321    3769 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:51.702604    3769 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:51.702610    3769 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:51.702613    3769 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (73.26175ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:48:57.600904    3773 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:48:57.601094    3773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:57.601099    3773 out.go:358] Setting ErrFile to fd 2...
	I0831 15:48:57.601101    3773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:48:57.601275    3773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:48:57.601437    3773 out.go:352] Setting JSON to false
	I0831 15:48:57.601452    3773 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:48:57.601490    3773 notify.go:220] Checking for updates...
	I0831 15:48:57.601706    3773 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:48:57.601715    3773 status.go:255] checking status of multinode-334000 ...
	I0831 15:48:57.602022    3773 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:48:57.602027    3773 status.go:343] host is not running, skipping remaining checks
	I0831 15:48:57.602030    3773 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (71.401292ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:49:06.753168    3778 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:49:06.753379    3778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:06.753384    3778 out.go:358] Setting ErrFile to fd 2...
	I0831 15:49:06.753387    3778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:06.753539    3778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:49:06.753692    3778 out.go:352] Setting JSON to false
	I0831 15:49:06.753710    3778 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:49:06.753761    3778 notify.go:220] Checking for updates...
	I0831 15:49:06.753962    3778 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:49:06.753970    3778 status.go:255] checking status of multinode-334000 ...
	I0831 15:49:06.754251    3778 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:49:06.754257    3778 status.go:343] host is not running, skipping remaining checks
	I0831 15:49:06.754259    3778 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (72.499208ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:49:24.186333    3788 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:49:24.186534    3788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:24.186539    3788 out.go:358] Setting ErrFile to fd 2...
	I0831 15:49:24.186542    3788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:24.186707    3788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:49:24.186857    3788 out.go:352] Setting JSON to false
	I0831 15:49:24.186870    3788 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:49:24.186908    3788 notify.go:220] Checking for updates...
	I0831 15:49:24.187157    3788 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:49:24.187166    3788 status.go:255] checking status of multinode-334000 ...
	I0831 15:49:24.187445    3788 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:49:24.187450    3788 status.go:343] host is not running, skipping remaining checks
	I0831 15:49:24.187453    3788 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (33.342334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.84s)
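The exit status 85 above (GUEST_NODE_RETRIEVE) is a knock-on failure: no m03 node was ever provisioned because every earlier start attempt died before the VM came up, so "node start m03" has nothing to retrieve, and the repeated exit status 7 from "status" simply encodes a stopped host rather than a command failure. A small sketch to confirm what the profile actually knows about, using only commands that already appear in this run (binary path and profile name taken from the log above):

    # List the nodes recorded for the profile; m03 missing here explains
    # the GUEST_NODE_RETRIEVE error from "node start m03".
    out/minikube-darwin-arm64 node list -p multinode-334000

    # "status" exits 7 while the host is stopped; that is expected, not a crash.
    out/minikube-darwin-arm64 status -p multinode-334000
    echo $?    # prints 7 for a stopped host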

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-334000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-334000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-334000: (3.460290459s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.220365417s)

                                                
                                                
-- stdout --
	* [multinode-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:49:27.773424    3814 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:49:27.773602    3814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:27.773607    3814 out.go:358] Setting ErrFile to fd 2...
	I0831 15:49:27.773610    3814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:27.773789    3814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:49:27.774952    3814 out.go:352] Setting JSON to false
	I0831 15:49:27.794329    3814 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2931,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:49:27.794402    3814 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:49:27.799051    3814 out.go:177] * [multinode-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:49:27.806928    3814 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:49:27.806989    3814 notify.go:220] Checking for updates...
	I0831 15:49:27.812899    3814 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:49:27.815903    3814 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:49:27.817231    3814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:49:27.819885    3814 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:49:27.822919    3814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:49:27.826283    3814 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:49:27.826335    3814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:49:27.830857    3814 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:49:27.837872    3814 start.go:297] selected driver: qemu2
	I0831 15:49:27.837878    3814 start.go:901] validating driver "qemu2" against &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:49:27.837933    3814 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:49:27.840348    3814 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:49:27.840397    3814 cni.go:84] Creating CNI manager for ""
	I0831 15:49:27.840403    3814 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0831 15:49:27.840455    3814 start.go:340] cluster config:
	{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:49:27.844252    3814 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:49:27.851923    3814 out.go:177] * Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	I0831 15:49:27.855879    3814 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:49:27.855897    3814 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:49:27.855906    3814 cache.go:56] Caching tarball of preloaded images
	I0831 15:49:27.855973    3814 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:49:27.855980    3814 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:49:27.856046    3814 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/multinode-334000/config.json ...
	I0831 15:49:27.856522    3814 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:49:27.856557    3814 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "multinode-334000"
	I0831 15:49:27.856566    3814 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:49:27.856573    3814 fix.go:54] fixHost starting: 
	I0831 15:49:27.856690    3814 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0831 15:49:27.856698    3814 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:49:27.860922    3814 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0831 15:49:27.871890    3814 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:49:27.871955    3814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9e:82:74:59:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:49:27.874002    3814 main.go:141] libmachine: STDOUT: 
	I0831 15:49:27.874025    3814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:49:27.874054    3814 fix.go:56] duration metric: took 17.4815ms for fixHost
	I0831 15:49:27.874059    3814 start.go:83] releasing machines lock for "multinode-334000", held for 17.498042ms
	W0831 15:49:27.874065    3814 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:49:27.874107    3814 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:49:27.874112    3814 start.go:729] Will try again in 5 seconds ...
	I0831 15:49:32.876238    3814 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:49:32.876629    3814 start.go:364] duration metric: took 290.458µs to acquireMachinesLock for "multinode-334000"
	I0831 15:49:32.876777    3814 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:49:32.876794    3814 fix.go:54] fixHost starting: 
	I0831 15:49:32.877461    3814 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0831 15:49:32.877486    3814 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:49:32.881751    3814 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0831 15:49:32.889876    3814 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:49:32.890082    3814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9e:82:74:59:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:49:32.899000    3814 main.go:141] libmachine: STDOUT: 
	I0831 15:49:32.899065    3814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:49:32.899127    3814 fix.go:56] duration metric: took 22.330084ms for fixHost
	I0831 15:49:32.899148    3814 start.go:83] releasing machines lock for "multinode-334000", held for 22.493209ms
	W0831 15:49:32.899329    3814 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:49:32.906843    3814 out.go:201] 
	W0831 15:49:32.910953    3814 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:49:32.910976    3814 out.go:270] * 
	* 
	W0831 15:49:32.913676    3814 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:49:32.919831    3814 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-334000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-334000
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (32.856292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.81s)
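Every restart in this section dies at the same step: libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and the client gets "Connection refused" on /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not running on the CI host. A hedged diagnostic sketch for the host (the restart line assumes a Homebrew-managed socket_vmnet service, one documented setup for the qemu2 driver; adjust if it was installed differently):

    # Is the daemon alive, and is its socket present?
    ps aux | grep '[s]ocket_vmnet'
    ls -l /var/run/socket_vmnet

    # If not, restart the service (Homebrew-managed install assumed):
    sudo "$(which brew)" services restart socket_vmnet

Note also that the quoted args in the failure message above ("node list") come from the test's error text; the command that actually exited 80 is the "start -p multinode-334000 --wait=true" invocation shown at multinode_test.go:326.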

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 node delete m03: exit status 83 (39.706833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-334000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-334000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr: exit status 7 (30.586208ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:49:33.103774    3828 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:49:33.103915    3828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:33.103918    3828 out.go:358] Setting ErrFile to fd 2...
	I0831 15:49:33.103921    3828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:33.104069    3828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:49:33.104179    3828 out.go:352] Setting JSON to false
	I0831 15:49:33.104189    3828 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:49:33.104247    3828 notify.go:220] Checking for updates...
	I0831 15:49:33.104395    3828 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:49:33.104402    3828 status.go:255] checking status of multinode-334000 ...
	I0831 15:49:33.104615    3828 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:49:33.104618    3828 status.go:343] host is not running, skipping remaining checks
	I0831 15:49:33.104621    3828 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.5785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
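"node delete" here exits 83 with advice text rather than attempting the delete: minikube refuses node operations while the control-plane host is stopped. A caller-side guard sketch (same binary and profile as the log; the --format={{.Host}} query mirrors the post-mortem command above):

    # Only attempt node surgery when the control-plane host is running.
    host_state="$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-334000)"
    if [ "$host_state" = "Running" ]; then
        out/minikube-darwin-arm64 -p multinode-334000 node delete m03
    else
        echo "host is $host_state; run: minikube start -p multinode-334000" >&2
    fi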

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-334000 stop: (3.729270208s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status: exit status 7 (69.289583ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr: exit status 7 (33.010917ms)

                                                
                                                
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:49:36.966547    3854 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:49:36.966730    3854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:36.966733    3854 out.go:358] Setting ErrFile to fd 2...
	I0831 15:49:36.966735    3854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:36.966859    3854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:49:36.966971    3854 out.go:352] Setting JSON to false
	I0831 15:49:36.966981    3854 mustload.go:65] Loading cluster: multinode-334000
	I0831 15:49:36.967048    3854 notify.go:220] Checking for updates...
	I0831 15:49:36.967171    3854 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:49:36.967182    3854 status.go:255] checking status of multinode-334000 ...
	I0831 15:49:36.967394    3854 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0831 15:49:36.967398    3854 status.go:343] host is not running, skipping remaining checks
	I0831 15:49:36.967400    3854 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr": multinode-334000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr": multinode-334000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.28525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.86s)
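The stop itself succeeded (3.73s, exit 0); the assertion fails on counting, since "status" prints one block per node and the test expects a stopped host and kubelet per node of the intended two-node cluster, while the output above contains only the control-plane block. The same count can be reproduced from a shell (a sketch; the grep strings mirror the status lines shown above, and "|| true" absorbs the expected exit status 7):

    out=$(out/minikube-darwin-arm64 -p multinode-334000 status || true)
    echo "$out" | grep -c 'host: Stopped'      # 2 expected for a 2-node cluster; this run shows 1
    echo "$out" | grep -c 'kubelet: Stopped'   # likewise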

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178157708s)

                                                
                                                
-- stdout --
	* [multinode-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 15:49:37.026440    3858 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:49:37.026565    3858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:37.026568    3858 out.go:358] Setting ErrFile to fd 2...
	I0831 15:49:37.026570    3858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:49:37.026698    3858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:49:37.027637    3858 out.go:352] Setting JSON to false
	I0831 15:49:37.043959    3858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2941,"bootTime":1725141636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:49:37.044022    3858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:49:37.048015    3858 out.go:177] * [multinode-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:49:37.054972    3858 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:49:37.055012    3858 notify.go:220] Checking for updates...
	I0831 15:49:37.061929    3858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:49:37.064987    3858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:49:37.067905    3858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:49:37.070951    3858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:49:37.073961    3858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:49:37.075599    3858 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:49:37.075863    3858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:49:37.079902    3858 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:49:37.086786    3858 start.go:297] selected driver: qemu2
	I0831 15:49:37.086795    3858 start.go:901] validating driver "qemu2" against &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:49:37.086853    3858 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:49:37.089076    3858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:49:37.089132    3858 cni.go:84] Creating CNI manager for ""
	I0831 15:49:37.089137    3858 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0831 15:49:37.089174    3858 start.go:340] cluster config:
	{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:49:37.092506    3858 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:49:37.100991    3858 out.go:177] * Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	I0831 15:49:37.104922    3858 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:49:37.104939    3858 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:49:37.104952    3858 cache.go:56] Caching tarball of preloaded images
	I0831 15:49:37.105010    3858 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:49:37.105016    3858 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:49:37.105077    3858 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/multinode-334000/config.json ...
	I0831 15:49:37.105543    3858 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:49:37.105570    3858 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "multinode-334000"
	I0831 15:49:37.105578    3858 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:49:37.105583    3858 fix.go:54] fixHost starting: 
	I0831 15:49:37.105696    3858 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0831 15:49:37.105704    3858 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:49:37.112898    3858 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0831 15:49:37.116992    3858 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:49:37.117035    3858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9e:82:74:59:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:49:37.118907    3858 main.go:141] libmachine: STDOUT: 
	I0831 15:49:37.118926    3858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:49:37.118955    3858 fix.go:56] duration metric: took 13.372292ms for fixHost
	I0831 15:49:37.118959    3858 start.go:83] releasing machines lock for "multinode-334000", held for 13.385208ms
	W0831 15:49:37.118966    3858 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:49:37.119003    3858 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:49:37.119007    3858 start.go:729] Will try again in 5 seconds ...
	I0831 15:49:42.120995    3858 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:49:42.121320    3858 start.go:364] duration metric: took 236.583µs to acquireMachinesLock for "multinode-334000"
	I0831 15:49:42.121445    3858 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:49:42.121462    3858 fix.go:54] fixHost starting: 
	I0831 15:49:42.122103    3858 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0831 15:49:42.122128    3858 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:49:42.126589    3858 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0831 15:49:42.133475    3858 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:49:42.133633    3858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9e:82:74:59:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/multinode-334000/disk.qcow2
	I0831 15:49:42.142295    3858 main.go:141] libmachine: STDOUT: 
	I0831 15:49:42.142377    3858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:49:42.142461    3858 fix.go:56] duration metric: took 20.99475ms for fixHost
	I0831 15:49:42.142478    3858 start.go:83] releasing machines lock for "multinode-334000", held for 21.134459ms
	W0831 15:49:42.142760    3858 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:49:42.149534    3858 out.go:201] 
	W0831 15:49:42.153565    3858 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:49:42.153593    3858 out.go:270] * 
	* 
	W0831 15:49:42.156443    3858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:49:42.164505    3858 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (71.622084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
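
Every restart attempt above dies at the same step: minikube launches QEMU through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so the VM never boots. A minimal diagnostic sketch for the CI host follows (shell; the binary and socket paths are copied from the log above, while the gateway address is only an illustrative default from the socket_vmnet README, not taken from this run):

	# check whether the daemon is running and its socket exists
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# if not, start it (root is needed to create the vmnet interface);
	# 192.168.105.1 is an example gateway, not a value from this log
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

	# then recreate the profile, as the error text itself suggests
	out/minikube-darwin-arm64 delete -p multinode-334000
	out/minikube-darwin-arm64 start -p multinode-334000 --driver=qemu2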

TestMultiNode/serial/ValidateNameConflict (20.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-334000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000-m01 --driver=qemu2 : exit status 80 (10.184338375s)

-- stdout --
	* [multinode-334000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-334000-m01" primary control-plane node in "multinode-334000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-334000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2 : exit status 80 (10.240886083s)

-- stdout --
	* [multinode-334000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-334000-m02" primary control-plane node in "multinode-334000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-334000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-334000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-334000: exit status 83 (79.068666ms)

-- stdout --
	* The control-plane node multinode-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-334000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-334000-m02
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (30.423958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.65s)
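
For reference, the sequence this test automates can be reproduced by hand once socket_vmnet is healthy; it checks that a standalone profile named like an auto-generated node of an existing cluster is handled without clobbering that cluster. A sketch using the profile names from the log above:

	out/minikube-darwin-arm64 node list -p multinode-334000
	# "-m01" mimics the suffix minikube gives additional nodes, so starting
	# it as a separate profile exercises the name-conflict validation
	out/minikube-darwin-arm64 start -p multinode-334000-m01 --driver=qemu2
	out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2
	out/minikube-darwin-arm64 node add -p multinode-334000
	out/minikube-darwin-arm64 delete -p multinode-334000-m02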

TestPreload (9.98s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-659000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-659000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.827812542s)

-- stdout --
	* [test-preload-659000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-659000" primary control-plane node in "test-preload-659000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-659000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:50:03.036341    3920 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:50:03.036468    3920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:03.036471    3920 out.go:358] Setting ErrFile to fd 2...
	I0831 15:50:03.036473    3920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:50:03.036597    3920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:50:03.037617    3920 out.go:352] Setting JSON to false
	I0831 15:50:03.053802    3920 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2967,"bootTime":1725141636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:50:03.053867    3920 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:50:03.060740    3920 out.go:177] * [test-preload-659000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:50:03.066433    3920 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:50:03.066474    3920 notify.go:220] Checking for updates...
	I0831 15:50:03.073706    3920 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:50:03.075308    3920 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:50:03.078661    3920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:50:03.081730    3920 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:50:03.084747    3920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:50:03.087989    3920 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:50:03.088046    3920 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:50:03.092719    3920 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:50:03.099715    3920 start.go:297] selected driver: qemu2
	I0831 15:50:03.099723    3920 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:50:03.099731    3920 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:50:03.101878    3920 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:50:03.104706    3920 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:50:03.107810    3920 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 15:50:03.107839    3920 cni.go:84] Creating CNI manager for ""
	I0831 15:50:03.107849    3920 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:50:03.107855    3920 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:50:03.107889    3920 start.go:340] cluster config:
	{Name:test-preload-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:50:03.111632    3920 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.118720    3920 out.go:177] * Starting "test-preload-659000" primary control-plane node in "test-preload-659000" cluster
	I0831 15:50:03.121661    3920 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0831 15:50:03.121734    3920 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/test-preload-659000/config.json ...
	I0831 15:50:03.121740    3920 cache.go:107] acquiring lock: {Name:mkffc3fab8d400aa84fca4df6908a184259b156b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121744    3920 cache.go:107] acquiring lock: {Name:mkfc1e7c75a166fa58acad4eff3515f7385523bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121747    3920 cache.go:107] acquiring lock: {Name:mk786e28caa2de71ea218df4350c5cab02490b95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121753    3920 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/test-preload-659000/config.json: {Name:mk232541a5180e4512b25c126f0e6bcfa1887f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:50:03.121768    3920 cache.go:107] acquiring lock: {Name:mk63c639effe46d448ec47e1c93e6fe5250b4baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121775    3920 cache.go:107] acquiring lock: {Name:mk849674ec23c5cbe4c68ad03d5a2e3abd97f342 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121778    3920 cache.go:107] acquiring lock: {Name:mk81e023e4e3df9cc6548ecfed28fafbe3a2fcfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121796    3920 cache.go:107] acquiring lock: {Name:mkdc05c43447dfdb7379d7386b67c077392b635c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.121986    3920 cache.go:107] acquiring lock: {Name:mk65e66a97e4fee3c4e9b636ff69969317908746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:50:03.122199    3920 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0831 15:50:03.122200    3920 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:50:03.122254    3920 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:50:03.122265    3920 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0831 15:50:03.122286    3920 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0831 15:50:03.122324    3920 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0831 15:50:03.122323    3920 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0831 15:50:03.122369    3920 start.go:360] acquireMachinesLock for test-preload-659000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:03.122465    3920 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:50:03.122496    3920 start.go:364] duration metric: took 109.625µs to acquireMachinesLock for "test-preload-659000"
	I0831 15:50:03.122515    3920 start.go:93] Provisioning new machine with config: &{Name:test-preload-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:03.122561    3920 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:03.130669    3920 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 15:50:03.133811    3920 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:50:03.134544    3920 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0831 15:50:03.134566    3920 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0831 15:50:03.134647    3920 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:50:03.136341    3920 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0831 15:50:03.136409    3920 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0831 15:50:03.136493    3920 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:50:03.136508    3920 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0831 15:50:03.149051    3920 start.go:159] libmachine.API.Create for "test-preload-659000" (driver="qemu2")
	I0831 15:50:03.149078    3920 client.go:168] LocalClient.Create starting
	I0831 15:50:03.149164    3920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:03.149197    3920 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:03.149220    3920 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:03.149261    3920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:03.149290    3920 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:03.149300    3920 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:03.149683    3920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:03.309508    3920 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:03.368245    3920 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:03.368277    3920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:03.368519    3920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2
	I0831 15:50:03.378536    3920 main.go:141] libmachine: STDOUT: 
	I0831 15:50:03.378559    3920 main.go:141] libmachine: STDERR: 
	I0831 15:50:03.378628    3920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2 +20000M
	I0831 15:50:03.387535    3920 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:03.387555    3920 main.go:141] libmachine: STDERR: 
	I0831 15:50:03.387578    3920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2
	I0831 15:50:03.387584    3920 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:03.387594    3920 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:03.387625    3920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:4f:13:fe:c1:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2
	I0831 15:50:03.390160    3920 main.go:141] libmachine: STDOUT: 
	I0831 15:50:03.390178    3920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:03.390198    3920 client.go:171] duration metric: took 241.122417ms to LocalClient.Create
	I0831 15:50:03.697770    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0831 15:50:03.730896    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0831 15:50:03.750312    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0831 15:50:03.771052    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0831 15:50:03.783620    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0831 15:50:03.789752    3920 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0831 15:50:03.789798    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0831 15:50:03.829750    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0831 15:50:03.871884    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0831 15:50:03.871953    3920 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 750.166625ms
	I0831 15:50:03.871986    3920 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0831 15:50:04.272075    3920 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0831 15:50:04.272154    3920 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0831 15:50:04.730542    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0831 15:50:04.730606    3920 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.608881167s
	I0831 15:50:04.730637    3920 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0831 15:50:05.390414    3920 start.go:128] duration metric: took 2.267881792s to createHost
	I0831 15:50:05.390468    3920 start.go:83] releasing machines lock for "test-preload-659000", held for 2.268016625s
	W0831 15:50:05.390539    3920 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:05.406739    3920 out.go:177] * Deleting "test-preload-659000" in qemu2 ...
	W0831 15:50:05.447900    3920 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:05.447932    3920 start.go:729] Will try again in 5 seconds ...
	I0831 15:50:05.818108    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0831 15:50:05.818155    3920 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.696436583s
	I0831 15:50:05.818182    3920 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0831 15:50:06.744358    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0831 15:50:06.744428    3920 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.622739042s
	I0831 15:50:06.744457    3920 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0831 15:50:08.183321    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0831 15:50:08.183406    3920 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.061787458s
	I0831 15:50:08.183432    3920 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0831 15:50:08.716187    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0831 15:50:08.716237    3920 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.594607s
	I0831 15:50:08.716262    3920 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0831 15:50:09.511922    3920 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0831 15:50:09.511968    3920 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.390381333s
	I0831 15:50:09.512029    3920 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0831 15:50:10.447980    3920 start.go:360] acquireMachinesLock for test-preload-659000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:50:10.448447    3920 start.go:364] duration metric: took 395.375µs to acquireMachinesLock for "test-preload-659000"
	I0831 15:50:10.448574    3920 start.go:93] Provisioning new machine with config: &{Name:test-preload-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:50:10.448847    3920 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:50:10.454586    3920 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 15:50:10.504969    3920 start.go:159] libmachine.API.Create for "test-preload-659000" (driver="qemu2")
	I0831 15:50:10.505034    3920 client.go:168] LocalClient.Create starting
	I0831 15:50:10.505151    3920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:50:10.505219    3920 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:10.505239    3920 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:10.505296    3920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:50:10.505340    3920 main.go:141] libmachine: Decoding PEM data...
	I0831 15:50:10.505354    3920 main.go:141] libmachine: Parsing certificate...
	I0831 15:50:10.505846    3920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:50:10.675990    3920 main.go:141] libmachine: Creating SSH key...
	I0831 15:50:10.771081    3920 main.go:141] libmachine: Creating Disk image...
	I0831 15:50:10.771087    3920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:50:10.771486    3920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2
	I0831 15:50:10.781627    3920 main.go:141] libmachine: STDOUT: 
	I0831 15:50:10.781647    3920 main.go:141] libmachine: STDERR: 
	I0831 15:50:10.781708    3920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2 +20000M
	I0831 15:50:10.789889    3920 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:50:10.789906    3920 main.go:141] libmachine: STDERR: 
	I0831 15:50:10.789918    3920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2
	I0831 15:50:10.789921    3920 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:50:10.789937    3920 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:50:10.789975    3920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d6:f7:29:82:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/test-preload-659000/disk.qcow2
	I0831 15:50:10.791764    3920 main.go:141] libmachine: STDOUT: 
	I0831 15:50:10.791779    3920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:50:10.791793    3920 client.go:171] duration metric: took 286.760125ms to LocalClient.Create
	I0831 15:50:12.792085    3920 start.go:128] duration metric: took 2.343223s to createHost
	I0831 15:50:12.792158    3920 start.go:83] releasing machines lock for "test-preload-659000", held for 2.343739292s
	W0831 15:50:12.792446    3920 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:50:12.801978    3920 out.go:201] 
	W0831 15:50:12.806062    3920 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:50:12.806115    3920 out.go:270] * 
	* 
	W0831 15:50:12.808795    3920 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:50:12.816921    3920 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-659000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-31 15:50:12.837752 -0700 PDT m=+2719.778578418
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-659000 -n test-preload-659000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-659000 -n test-preload-659000: exit status 7 (63.350292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "test-preload-659000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "test-preload-659000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-659000
--- FAIL: TestPreload (9.98s)
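
One detail worth noting: although both VM creation attempts failed, the image half of the test completed. With --preload=false minikube skips the single preloaded tarball and instead pulls each v1.24.4 component image and saves it as a per-image tarball, which is what the cache.go lines above record. The resulting cache can be inspected directly (a sketch; the path is copied from the log):

	ls /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/
	# entries recorded in the log: kube-apiserver_v1.24.4, kube-controller-manager_v1.24.4,
	# kube-scheduler_v1.24.4, kube-proxy_v1.24.4, etcd_3.5.3-0, pause_3.7,
	# plus coredns/coredns_v1.8.6 in the nested coredns directory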

TestScheduledStopUnix (10.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-196000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-196000 --memory=2048 --driver=qemu2 : exit status 80 (9.94684775s)

-- stdout --
	* [scheduled-stop-196000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-196000" primary control-plane node in "scheduled-stop-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-196000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-196000" primary control-plane node in "scheduled-stop-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-31 15:50:22.930592 -0700 PDT m=+2729.871671418
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-196000 -n scheduled-stop-196000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-196000 -n scheduled-stop-196000: exit status 7 (69.826583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-196000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-196000
--- FAIL: TestScheduledStopUnix (10.10s)
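Note: the start failure above exits with status 80 because nothing is listening on /var/run/socket_vmnet, the socket the qemu2 driver dials for networking. A minimal diagnostic sketch for the build host, assuming socket_vmnet was installed via Homebrew (the brew-services invocation is an assumption, not taken from this log; only the socket path comes from the output above):

	# Is the socket_vmnet daemon running, and does the socket minikube dials exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Restart the service; socket_vmnet must run as root to create the socket (Homebrew layout assumed).
	sudo brew services restart socket_vmnet

If the socket reappears, re-running the failed start against the same profile should get past the GUEST_PROVISION error.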

TestSkaffold (12.18s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2743076510 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2743076510 version: (1.063524917s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-388000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-388000 --memory=2600 --driver=qemu2 : exit status 80 (9.879405459s)

-- stdout --
	* [skaffold-388000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-388000" primary control-plane node in "skaffold-388000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-388000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-388000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-388000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-388000" primary control-plane node in "skaffold-388000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-388000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-388000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-31 15:50:35.106871 -0700 PDT m=+2742.048254793
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-388000 -n skaffold-388000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-388000 -n skaffold-388000: exit status 7 (64.904708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "skaffold-388000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "skaffold-388000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-388000
--- FAIL: TestSkaffold (12.18s)

TestRunningBinaryUpgrade (585.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
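For context, the upgrade flow this test exercises: create a cluster with an old release binary, then run the freshly built binary against the same profile so it upgrades in place, as the log below shows. A hand-run equivalent of the two steps (both commands appear verbatim in the log; the first binary is a temp-file copy of minikube v1.26.0 that the test downloads):

	# step 1: create the cluster with the old release
	/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1401754554 start -p running-upgrade-137000 --memory=2200 --vm-driver=qemu2
	# step 2: re-run start with the binary under test on the existing profile
	out/minikube-darwin-arm64 start -p running-upgrade-137000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2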
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1401754554 start -p running-upgrade-137000 --memory=2200 --vm-driver=qemu2 
E0831 15:52:07.217703    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1401754554 start -p running-upgrade-137000 --memory=2200 --vm-driver=qemu2 : (49.331496625s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-137000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0831 15:52:32.412125    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-137000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.720977375s)

-- stdout --
	* [running-upgrade-137000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-137000" primary control-plane node in "running-upgrade-137000" cluster
	* Updating the running qemu2 "running-upgrade-137000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0831 15:52:08.679400    4326 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:52:08.679549    4326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:52:08.679553    4326 out.go:358] Setting ErrFile to fd 2...
	I0831 15:52:08.679556    4326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:52:08.679684    4326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:52:08.681092    4326 out.go:352] Setting JSON to false
	I0831 15:52:08.697920    4326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3092,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:52:08.698018    4326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:52:08.701219    4326 out.go:177] * [running-upgrade-137000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:52:08.708323    4326 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:52:08.708376    4326 notify.go:220] Checking for updates...
	I0831 15:52:08.716259    4326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:52:08.720277    4326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:52:08.723284    4326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:52:08.726276    4326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:52:08.729271    4326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:52:08.732469    4326 config.go:182] Loaded profile config "running-upgrade-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:52:08.736220    4326 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0831 15:52:08.739273    4326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:52:08.742232    4326 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:52:08.751215    4326 start.go:297] selected driver: qemu2
	I0831 15:52:08.751221    4326 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-137000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:52:08.751267    4326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:52:08.753508    4326 cni.go:84] Creating CNI manager for ""
	I0831 15:52:08.753529    4326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:52:08.753552    4326 start.go:340] cluster config:
	{Name:running-upgrade-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-137000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:52:08.753605    4326 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:52:08.759318    4326 out.go:177] * Starting "running-upgrade-137000" primary control-plane node in "running-upgrade-137000" cluster
	I0831 15:52:08.767238    4326 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0831 15:52:08.767258    4326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0831 15:52:08.767269    4326 cache.go:56] Caching tarball of preloaded images
	I0831 15:52:08.767328    4326 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:52:08.767333    4326 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0831 15:52:08.767387    4326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/config.json ...
	I0831 15:52:08.767723    4326 start.go:360] acquireMachinesLock for running-upgrade-137000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:52:08.767750    4326 start.go:364] duration metric: took 20.959µs to acquireMachinesLock for "running-upgrade-137000"
	I0831 15:52:08.767758    4326 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:52:08.767766    4326 fix.go:54] fixHost starting: 
	I0831 15:52:08.768376    4326 fix.go:112] recreateIfNeeded on running-upgrade-137000: state=Running err=<nil>
	W0831 15:52:08.768391    4326 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:52:08.776230    4326 out.go:177] * Updating the running qemu2 "running-upgrade-137000" VM ...
	I0831 15:52:08.780259    4326 machine.go:93] provisionDockerMachine start ...
	I0831 15:52:08.780305    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:08.780416    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:08.780420    4326 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 15:52:08.832256    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-137000
	
	I0831 15:52:08.832269    4326 buildroot.go:166] provisioning hostname "running-upgrade-137000"
	I0831 15:52:08.832317    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:08.832439    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:08.832444    4326 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-137000 && echo "running-upgrade-137000" | sudo tee /etc/hostname
	I0831 15:52:08.884212    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-137000
	
	I0831 15:52:08.884260    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:08.884368    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:08.884379    4326 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-137000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-137000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-137000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 15:52:08.931939    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:52:08.931955    4326 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18943-972/.minikube CaCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18943-972/.minikube}
	I0831 15:52:08.931961    4326 buildroot.go:174] setting up certificates
	I0831 15:52:08.931965    4326 provision.go:84] configureAuth start
	I0831 15:52:08.931969    4326 provision.go:143] copyHostCerts
	I0831 15:52:08.932036    4326 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem, removing ...
	I0831 15:52:08.932043    4326 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem
	I0831 15:52:08.932171    4326 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem (1082 bytes)
	I0831 15:52:08.932350    4326 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem, removing ...
	I0831 15:52:08.932354    4326 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem
	I0831 15:52:08.932406    4326 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem (1123 bytes)
	I0831 15:52:08.932508    4326 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem, removing ...
	I0831 15:52:08.932512    4326 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem
	I0831 15:52:08.932551    4326 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem (1679 bytes)
	I0831 15:52:08.932650    4326 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-137000 san=[127.0.0.1 localhost minikube running-upgrade-137000]
	I0831 15:52:09.154366    4326 provision.go:177] copyRemoteCerts
	I0831 15:52:09.154420    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 15:52:09.154430    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	I0831 15:52:09.182339    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 15:52:09.189014    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0831 15:52:09.196333    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 15:52:09.203797    4326 provision.go:87] duration metric: took 271.825791ms to configureAuth
	I0831 15:52:09.203807    4326 buildroot.go:189] setting minikube options for container-runtime
	I0831 15:52:09.203921    4326 config.go:182] Loaded profile config "running-upgrade-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:52:09.203961    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:09.204056    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:09.204060    4326 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 15:52:09.254844    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0831 15:52:09.254854    4326 buildroot.go:70] root file system type: tmpfs
	I0831 15:52:09.254907    4326 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 15:52:09.254947    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:09.255056    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:09.255089    4326 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 15:52:09.306728    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0831 15:52:09.306781    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:09.306896    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:09.306905    4326 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 15:52:09.357060    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:52:09.357070    4326 machine.go:96] duration metric: took 576.8195ms to provisionDockerMachine
	I0831 15:52:09.357076    4326 start.go:293] postStartSetup for "running-upgrade-137000" (driver="qemu2")
	I0831 15:52:09.357082    4326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 15:52:09.357136    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 15:52:09.357150    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	I0831 15:52:09.384065    4326 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 15:52:09.385502    4326 info.go:137] Remote host: Buildroot 2021.02.12
	I0831 15:52:09.385511    4326 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/addons for local assets ...
	I0831 15:52:09.385577    4326 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/files for local assets ...
	I0831 15:52:09.385668    4326 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem -> 14322.pem in /etc/ssl/certs
	I0831 15:52:09.385770    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 15:52:09.388293    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:52:09.396228    4326 start.go:296] duration metric: took 39.144542ms for postStartSetup
	I0831 15:52:09.396246    4326 fix.go:56] duration metric: took 628.499542ms for fixHost
	I0831 15:52:09.396291    4326 main.go:141] libmachine: Using SSH client type: native
	I0831 15:52:09.396415    4326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046dc5a0] 0x1046dee00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0831 15:52:09.396419    4326 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 15:52:09.449048    4326 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725144729.738794012
	
	I0831 15:52:09.449057    4326 fix.go:216] guest clock: 1725144729.738794012
	I0831 15:52:09.449061    4326 fix.go:229] Guest: 2024-08-31 15:52:09.738794012 -0700 PDT Remote: 2024-08-31 15:52:09.396247 -0700 PDT m=+0.737628376 (delta=342.547012ms)
	I0831 15:52:09.449072    4326 fix.go:200] guest clock delta is within tolerance: 342.547012ms
	I0831 15:52:09.449074    4326 start.go:83] releasing machines lock for "running-upgrade-137000", held for 681.33675ms
	I0831 15:52:09.449135    4326 ssh_runner.go:195] Run: cat /version.json
	I0831 15:52:09.449139    4326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 15:52:09.449144    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	I0831 15:52:09.449159    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	W0831 15:52:09.449683    4326 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50373->127.0.0.1:50261: read: connection reset by peer
	I0831 15:52:09.449695    4326 retry.go:31] will retry after 143.087989ms: ssh: handshake failed: read tcp 127.0.0.1:50373->127.0.0.1:50261: read: connection reset by peer
	W0831 15:52:09.618424    4326 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0831 15:52:09.618499    4326 ssh_runner.go:195] Run: systemctl --version
	I0831 15:52:09.620332    4326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 15:52:09.622036    4326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 15:52:09.622062    4326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0831 15:52:09.624988    4326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0831 15:52:09.629806    4326 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 15:52:09.629815    4326 start.go:495] detecting cgroup driver to use...
	I0831 15:52:09.629873    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:52:09.635203    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0831 15:52:09.638089    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 15:52:09.641125    4326 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 15:52:09.641149    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 15:52:09.644356    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:52:09.647565    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 15:52:09.650505    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:52:09.654445    4326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 15:52:09.657391    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 15:52:09.660210    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 15:52:09.663619    4326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 15:52:09.666847    4326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 15:52:09.669656    4326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 15:52:09.672489    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:52:09.747460    4326 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 15:52:09.754912    4326 start.go:495] detecting cgroup driver to use...
	I0831 15:52:09.754985    4326 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 15:52:09.763408    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:52:09.768362    4326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 15:52:09.775559    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:52:09.780761    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:52:09.785561    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:52:09.790914    4326 ssh_runner.go:195] Run: which cri-dockerd
	I0831 15:52:09.792225    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 15:52:09.794971    4326 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0831 15:52:09.800149    4326 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 15:52:09.875993    4326 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 15:52:09.954311    4326 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 15:52:09.954380    4326 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 15:52:09.962735    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:52:10.039643    4326 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:52:13.282369    4326 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.242790667s)
	I0831 15:52:13.282438    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 15:52:13.287758    4326 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0831 15:52:13.293954    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:52:13.298555    4326 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 15:52:13.386628    4326 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 15:52:13.449654    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:52:13.512749    4326 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 15:52:13.519389    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:52:13.523947    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:52:13.589421    4326 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 15:52:13.629002    4326 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 15:52:13.629091    4326 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 15:52:13.631505    4326 start.go:563] Will wait 60s for crictl version
	I0831 15:52:13.631557    4326 ssh_runner.go:195] Run: which crictl
	I0831 15:52:13.632858    4326 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 15:52:13.645321    4326 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0831 15:52:13.645390    4326 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:52:13.658602    4326 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:52:13.677403    4326 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0831 15:52:13.677470    4326 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0831 15:52:13.678945    4326 kubeadm.go:883] updating cluster {Name:running-upgrade-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-137000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0831 15:52:13.678987    4326 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0831 15:52:13.679035    4326 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:52:13.689112    4326 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:52:13.689120    4326 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0831 15:52:13.689160    4326 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:52:13.692638    4326 ssh_runner.go:195] Run: which lz4
	I0831 15:52:13.693825    4326 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 15:52:13.695048    4326 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 15:52:13.695058    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0831 15:52:14.642403    4326 docker.go:649] duration metric: took 948.633458ms to copy over tarball
	I0831 15:52:14.642465    4326 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 15:52:15.770790    4326 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1283395s)
	I0831 15:52:15.770805    4326 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 15:52:15.786845    4326 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:52:15.789845    4326 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0831 15:52:15.795490    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:52:15.864575    4326 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:52:17.054987    4326 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.190422792s)
	I0831 15:52:17.055073    4326 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:52:17.071097    4326 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:52:17.071107    4326 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0831 15:52:17.071112    4326 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0831 15:52:17.075321    4326 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:52:17.077268    4326 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:52:17.078839    4326 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:52:17.078895    4326 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:52:17.080772    4326 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:52:17.081660    4326 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:52:17.083035    4326 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:52:17.083042    4326 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:52:17.083935    4326 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:52:17.084014    4326 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:52:17.085112    4326 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0831 15:52:17.086190    4326 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:52:17.086227    4326 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:52:17.087248    4326 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:52:17.087243    4326 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0831 15:52:17.088531    4326 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:52:17.463447    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:52:17.479615    4326 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0831 15:52:17.479641    4326 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:52:17.479689    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:52:17.489774    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0831 15:52:17.501916    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:52:17.511745    4326 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0831 15:52:17.511767    4326 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:52:17.511814    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:52:17.518059    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:52:17.521704    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0831 15:52:17.522931    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:52:17.531454    4326 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0831 15:52:17.531485    4326 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:52:17.531532    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:52:17.540454    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0831 15:52:17.545822    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0831 15:52:17.545986    4326 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0831 15:52:17.546002    4326 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:52:17.546038    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:52:17.557811    4326 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0831 15:52:17.557830    4326 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:52:17.557877    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0831 15:52:17.559182    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0831 15:52:17.567901    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0831 15:52:17.575688    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0831 15:52:17.585336    4326 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0831 15:52:17.585356    4326 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0831 15:52:17.585403    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0831 15:52:17.595289    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0831 15:52:17.595400    4326 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0831 15:52:17.597092    4326 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0831 15:52:17.597104    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0831 15:52:17.597237    4326 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0831 15:52:17.597335    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:52:17.605291    4326 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0831 15:52:17.605304    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0831 15:52:17.609529    4326 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0831 15:52:17.609554    4326 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:52:17.609598    4326 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:52:17.641724    4326 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0831 15:52:17.641773    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0831 15:52:17.641889    4326 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0831 15:52:17.643336    4326 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0831 15:52:17.643347    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0831 15:52:17.682971    4326 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0831 15:52:17.682988    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0831 15:52:17.719643    4326 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0831 15:52:17.863890    4326 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0831 15:52:17.864108    4326 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:52:17.885364    4326 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0831 15:52:17.885401    4326 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:52:17.885477    4326 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:52:18.708049    4326 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0831 15:52:18.708351    4326 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0831 15:52:18.713180    4326 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0831 15:52:18.713211    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0831 15:52:18.767167    4326 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0831 15:52:18.767182    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0831 15:52:19.013567    4326 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0831 15:52:19.013604    4326 cache_images.go:92] duration metric: took 1.942533833s to LoadCachedImages
	W0831 15:52:19.013644    4326 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
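Each cached image above follows the same transfer-and-load sequence: a stat probe on the node, an scp when the probe exits non-zero, then the tarball piped into `docker load`. A minimal Go sketch of that flow, with a hypothetical loadCachedImage helper; `ssh node`/`scp` stand in for minikube's ssh_runner, and the paths are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the pattern in the log above: probe for the image
// tarball on the node, transfer it if the stat fails, then stream it into
// the container runtime. "node" and both paths are illustrative only.
func loadCachedImage(localTar, remoteTar string) error {
	// Existence check, as in `stat -c "%s %y" <file>`; exit status 1 above
	// means the tarball is missing and still has to be copied over.
	if err := exec.Command("ssh", "node", "stat", "-c", "%s %y", remoteTar).Run(); err != nil {
		if err := exec.Command("scp", localTar, "node:"+remoteTar).Run(); err != nil {
			return fmt.Errorf("transfer %s: %w", localTar, err)
		}
	}
	// Same shape as the log's `sudo cat <tar> | docker load`.
	load := fmt.Sprintf("sudo cat %s | docker load", remoteTar)
	if out, err := exec.Command("ssh", "node", "/bin/bash", "-c", load).CombinedOutput(); err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(loadCachedImage("pause_3.7", "/var/lib/minikube/images/pause_3.7"))
}
```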
	I0831 15:52:19.013652    4326 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0831 15:52:19.013702    4326 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-137000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-137000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 15:52:19.013761    4326 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 15:52:19.027096    4326 cni.go:84] Creating CNI manager for ""
	I0831 15:52:19.027108    4326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:52:19.027113    4326 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 15:52:19.027122    4326 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-137000 NodeName:running-upgrade-137000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 15:52:19.027188    4326 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-137000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 15:52:19.027247    4326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0831 15:52:19.030138    4326 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 15:52:19.030173    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 15:52:19.032843    4326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0831 15:52:19.038221    4326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 15:52:19.043113    4326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
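The kubeadm manifest above is rendered host-side and then shipped to /var/tmp/minikube/kubeadm.yaml.new (the 2096-byte scp just above). A trimmed sketch of how such a manifest can be produced with Go's text/template; the template body and struct field names here are illustrative, not minikube's actual ones:

```go
package main

import (
	"os"
	"text/template"
)

// Trimmed-down InitConfiguration template; only a few of the fields from
// the full manifest above are reproduced here.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log: node IP 10.0.2.15, port 8443, profile name.
	err := t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"10.0.2.15", 8443, "running-upgrade-137000"})
	if err != nil {
		panic(err)
	}
}
```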
	I0831 15:52:19.049317    4326 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0831 15:52:19.050632    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:52:19.119509    4326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:52:19.124397    4326 certs.go:68] Setting up /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000 for IP: 10.0.2.15
	I0831 15:52:19.124404    4326 certs.go:194] generating shared ca certs ...
	I0831 15:52:19.124413    4326 certs.go:226] acquiring lock for ca certs: {Name:mk9afb7f1a72cfb2bc15164a899be0d76a69591d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:52:19.124566    4326 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key
	I0831 15:52:19.124600    4326 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key
	I0831 15:52:19.124606    4326 certs.go:256] generating profile certs ...
	I0831 15:52:19.124670    4326 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.key
	I0831 15:52:19.124688    4326 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.key.b8fdeafe
	I0831 15:52:19.124698    4326 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.crt.b8fdeafe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0831 15:52:19.257394    4326 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.crt.b8fdeafe ...
	I0831 15:52:19.257413    4326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.crt.b8fdeafe: {Name:mka983925fc041dc36ad0165450ad236e1fa410e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:52:19.257762    4326 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.key.b8fdeafe ...
	I0831 15:52:19.257772    4326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.key.b8fdeafe: {Name:mk79308506d30e14677e725f342de4f059c27570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:52:19.257916    4326 certs.go:381] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.crt.b8fdeafe -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.crt
	I0831 15:52:19.258181    4326 certs.go:385] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.key.b8fdeafe -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.key
	I0831 15:52:19.258388    4326 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/proxy-client.key
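The apiserver serving cert generated above carries IP SANs for the service VIP (10.96.0.1), localhost, 10.0.0.1, and the node IP, and is signed by the shared minikubeCA. A compact crypto/x509 sketch of that shape; key sizes, serial numbers, and validity periods are illustrative, and error handling is elided for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for the shared minikubeCA above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```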
	I0831 15:52:19.258527    4326 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem (1338 bytes)
	W0831 15:52:19.258555    4326 certs.go:480] ignoring /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432_empty.pem, impossibly tiny 0 bytes
	I0831 15:52:19.258560    4326 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 15:52:19.258580    4326 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem (1082 bytes)
	I0831 15:52:19.258601    4326 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem (1123 bytes)
	I0831 15:52:19.258618    4326 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem (1679 bytes)
	I0831 15:52:19.258661    4326 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:52:19.259023    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 15:52:19.280476    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 15:52:19.307918    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 15:52:19.315162    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0831 15:52:19.324656    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0831 15:52:19.338766    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 15:52:19.348515    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 15:52:19.356607    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 15:52:19.368024    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /usr/share/ca-certificates/14322.pem (1708 bytes)
	I0831 15:52:19.381752    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 15:52:19.405831    4326 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem --> /usr/share/ca-certificates/1432.pem (1338 bytes)
	I0831 15:52:19.428950    4326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 15:52:19.444566    4326 ssh_runner.go:195] Run: openssl version
	I0831 15:52:19.446938    4326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 15:52:19.458231    4326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:52:19.472272    4326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:05 /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:52:19.472332    4326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:52:19.480039    4326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 15:52:19.497829    4326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1432.pem && ln -fs /usr/share/ca-certificates/1432.pem /etc/ssl/certs/1432.pem"
	I0831 15:52:19.509568    4326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1432.pem
	I0831 15:52:19.513051    4326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:20 /usr/share/ca-certificates/1432.pem
	I0831 15:52:19.513085    4326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1432.pem
	I0831 15:52:19.518858    4326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1432.pem /etc/ssl/certs/51391683.0"
	I0831 15:52:19.525283    4326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14322.pem && ln -fs /usr/share/ca-certificates/14322.pem /etc/ssl/certs/14322.pem"
	I0831 15:52:19.535346    4326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14322.pem
	I0831 15:52:19.540518    4326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:20 /usr/share/ca-certificates/14322.pem
	I0831 15:52:19.540568    4326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14322.pem
	I0831 15:52:19.542517    4326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14322.pem /etc/ssl/certs/3ec20f2e.0"
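The test-and-link commands above implement OpenSSL's c_rehash convention: each CA file gets a symlink named <subject-hash>.0 under /etc/ssl/certs so certificate verification can locate it. A small sketch of that step, run locally for illustration (minikube issues the same commands on the node over SSH); certPath is an example value:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash,
	// e.g. "b5213941" as seen in the symlink name above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // `ln -fs` semantics: replace the link if it exists
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
```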
	I0831 15:52:19.545312    4326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 15:52:19.546822    4326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 15:52:19.548644    4326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 15:52:19.551884    4326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 15:52:19.554116    4326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 15:52:19.556478    4326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 15:52:19.559416    4326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0831 15:52:19.561503    4326 kubeadm.go:392] StartCluster: {Name:running-upgrade-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-137000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:52:19.561579    4326 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:52:19.577098    4326 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 15:52:19.582720    4326 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 15:52:19.582731    4326 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 15:52:19.582777    4326 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 15:52:19.588898    4326 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:52:19.589146    4326 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-137000" does not appear in /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:52:19.589194    4326 kubeconfig.go:62] /Users/jenkins/minikube-integration/18943-972/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-137000" cluster setting kubeconfig missing "running-upgrade-137000" context setting]
	I0831 15:52:19.589334    4326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:52:19.589718    4326 kapi.go:59] client config for running-upgrade-137000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c98030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 15:52:19.590042    4326 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 15:52:19.596174    4326 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-137000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
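The drift check above leans on diff's exit code: 0 means the staged kubeadm.yaml.new matches what is already on disk, 1 means the files differ (drift detected, reconfigure the cluster), and anything higher means diff itself failed. A reduced sketch of that decision, run locally for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted reports whether the staged config differs from the
// active one, returning the unified diff when it does.
func kubeadmConfigDrifted() (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical, keep the running configuration
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: the files differ
	}
	return false, "", err // exit >1 or exec failure: diff itself broke
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted()
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
```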
	I0831 15:52:19.596182    4326 kubeadm.go:1160] stopping kube-system containers ...
	I0831 15:52:19.596243    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:52:19.613123    4326 docker.go:483] Stopping containers: [383c3cd78d19 7529c05adaa5 e91f0f212132 683b650261f9 6ade8d915431 54a9a612ff3b 93f811406083 50ac2ea948e7 8286b33c1919 803a82350bfc e2ac9d4d3baa 8391036b8e2d d3d1621e3d4d 14d4c10074cc 1c93be87690b 20c1f94160ea 7ef49e8c14e5 40fcd8a27029 d048405b6fd7]
	I0831 15:52:19.613192    4326 ssh_runner.go:195] Run: docker stop 383c3cd78d19 7529c05adaa5 e91f0f212132 683b650261f9 6ade8d915431 54a9a612ff3b 93f811406083 50ac2ea948e7 8286b33c1919 803a82350bfc e2ac9d4d3baa 8391036b8e2d d3d1621e3d4d 14d4c10074cc 1c93be87690b 20c1f94160ea 7ef49e8c14e5 40fcd8a27029 d048405b6fd7
	I0831 15:52:19.794832    4326 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0831 15:52:19.874758    4326 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:52:19.878155    4326 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 31 22:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 31 22:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 31 22:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 31 22:51 /etc/kubernetes/scheduler.conf
	
	I0831 15:52:19.878180    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf
	I0831 15:52:19.881071    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:52:19.881101    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 15:52:19.883948    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf
	I0831 15:52:19.886723    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:52:19.886748    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 15:52:19.889491    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf
	I0831 15:52:19.892364    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:52:19.892390    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:52:19.895221    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf
	I0831 15:52:19.897912    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:52:19.897940    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 15:52:19.900662    4326 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:52:19.903656    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:52:19.936531    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:52:20.596040    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:52:20.774636    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:52:20.801703    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:52:20.823852    4326 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:52:20.823921    4326 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:52:20.828822    4326 api_server.go:72] duration metric: took 4.971875ms to wait for apiserver process to appear ...
	I0831 15:52:20.828833    4326 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:52:20.828842    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:25.830842    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:25.830873    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:30.831180    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:30.831248    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:35.832584    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:35.832701    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:40.833835    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:40.833876    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:45.835233    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:45.835313    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:50.837511    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:50.837595    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:52:55.840097    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:52:55.840174    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:00.840958    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:00.841043    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:05.843684    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:05.843766    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:10.844495    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:10.844576    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:15.845890    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:15.845963    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:20.848345    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
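Each healthz attempt above gets roughly a five-second budget, and a client timeout ("Client.Timeout exceeded while awaiting headers") simply triggers the next attempt until the overall restart deadline runs out. A minimal sketch of that probe loop; TLS verification is skipped here purely to keep the example short (minikube trusts the cluster CA instead):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it answers 200 OK or
// the overall deadline passes. Each request has its own timeout, matching
// the ~5s gaps between the stopped/checking pairs in the log above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered healthz
			}
		}
		// Timed out, refused, or unhealthy: pause briefly and try again.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}
```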
	I0831 15:53:20.848721    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:53:20.866250    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:53:20.866337    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:53:20.879272    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:53:20.879329    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:53:20.890418    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:53:20.890479    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:53:20.902281    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:53:20.902362    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:53:20.913147    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:53:20.913222    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:53:20.923516    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:53:20.923582    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:53:20.933957    4326 logs.go:276] 0 containers: []
	W0831 15:53:20.933970    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:53:20.934024    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:53:20.944789    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:53:20.944815    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:53:20.944821    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:53:21.019816    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:53:21.019829    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:53:21.033688    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:53:21.033697    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:53:21.048710    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:53:21.048724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:53:21.060827    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:53:21.060841    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:53:21.087156    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:53:21.087167    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:53:21.122002    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:53:21.122013    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:53:21.133269    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:53:21.133281    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:53:21.144980    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:53:21.144992    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:53:21.149921    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:53:21.149931    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:53:21.175587    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:53:21.175597    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:53:21.186905    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:53:21.186919    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:53:21.199642    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:53:21.199655    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:53:21.221016    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:53:21.221029    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:53:21.234921    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:53:21.234935    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:53:21.246737    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:53:21.246750    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:53:21.263683    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:53:21.263697    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:53:23.783496    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:28.785701    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:28.786101    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:53:28.834244    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:53:28.834372    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:53:28.854212    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:53:28.854288    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:53:28.868909    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:53:28.868975    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:53:28.881193    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:53:28.881258    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:53:28.892608    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:53:28.892679    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:53:28.903266    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:53:28.903320    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:53:28.913358    4326 logs.go:276] 0 containers: []
	W0831 15:53:28.913366    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:53:28.913413    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:53:28.923971    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:53:28.923990    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:53:28.923996    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:53:28.961135    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:53:28.961144    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:53:28.975303    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:53:28.975315    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:53:28.992764    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:53:28.992777    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:53:29.008434    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:53:29.008447    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:53:29.019284    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:53:29.019296    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:53:29.046567    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:53:29.046580    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:53:29.085906    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:53:29.085920    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:53:29.106544    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:53:29.106554    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:53:29.117836    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:53:29.117846    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:53:29.129642    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:53:29.129651    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:53:29.141511    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:53:29.141525    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:53:29.155515    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:53:29.155526    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:53:29.159966    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:53:29.159986    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:53:29.175513    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:53:29.175527    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:53:29.187517    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:53:29.187528    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:53:29.198746    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:53:29.198760    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:53:31.711991    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:36.714431    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:36.714890    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:53:36.753330    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:53:36.753464    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:53:36.775441    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:53:36.775554    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:53:36.790759    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:53:36.790828    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:53:36.803508    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:53:36.803578    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:53:36.814455    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:53:36.814529    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:53:36.825283    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:53:36.825351    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:53:36.835912    4326 logs.go:276] 0 containers: []
	W0831 15:53:36.835922    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:53:36.835974    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:53:36.846806    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:53:36.846832    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:53:36.846838    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:53:36.883465    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:53:36.883473    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:53:36.888011    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:53:36.888019    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:53:36.907902    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:53:36.907915    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:53:36.921831    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:53:36.921843    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:53:36.933931    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:53:36.933946    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:53:36.946137    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:53:36.946148    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:53:36.958320    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:53:36.958330    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:53:36.994538    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:53:36.994551    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:53:37.011961    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:53:37.011975    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:53:37.023424    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:53:37.023435    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:53:37.050697    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:53:37.050707    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:53:37.070942    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:53:37.070956    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:53:37.094643    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:53:37.094655    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:53:37.106681    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:53:37.106691    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:53:37.119003    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:53:37.119015    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:53:37.131120    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:53:37.131130    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:53:39.641921    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:44.640408    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:44.640859    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:53:44.682485    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:53:44.682625    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:53:44.704021    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:53:44.704124    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:53:44.719064    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:53:44.719136    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:53:44.731325    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:53:44.731398    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:53:44.743621    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:53:44.743683    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:53:44.754402    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:53:44.754462    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:53:44.764632    4326 logs.go:276] 0 containers: []
	W0831 15:53:44.764645    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:53:44.764698    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:53:44.774839    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:53:44.774855    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:53:44.774861    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:53:44.808783    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:53:44.808790    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:53:44.813363    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:53:44.813372    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:53:44.825123    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:53:44.825135    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:53:44.836276    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:53:44.836286    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:53:44.862998    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:53:44.863007    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:53:44.877192    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:53:44.877201    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:53:44.895429    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:53:44.895439    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:53:44.912102    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:53:44.912114    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:53:44.925516    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:53:44.925525    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:53:44.961230    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:53:44.961245    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:53:44.975403    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:53:44.975414    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:53:44.989599    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:53:44.989610    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:53:45.001413    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:53:45.001423    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:53:45.015263    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:53:45.015273    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:53:45.035285    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:53:45.035298    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:53:45.046445    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:53:45.046457    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:53:47.558003    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:53:52.558053    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:53:52.558511    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:53:52.590672    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:53:52.590808    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:53:52.611693    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:53:52.611798    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:53:52.632191    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:53:52.632275    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:53:52.649178    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:53:52.649243    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:53:52.663637    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:53:52.663703    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:53:52.673673    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:53:52.673736    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:53:52.683909    4326 logs.go:276] 0 containers: []
	W0831 15:53:52.683920    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:53:52.683970    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:53:52.694411    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:53:52.694430    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:53:52.694436    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:53:52.728327    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:53:52.728335    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:53:52.732853    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:53:52.732860    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:53:52.766264    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:53:52.766276    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:53:52.788016    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:53:52.788026    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:53:52.799387    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:53:52.799398    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:53:52.816225    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:53:52.816235    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:53:52.830941    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:53:52.830955    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:53:52.844641    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:53:52.844652    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:53:52.858913    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:53:52.858924    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:53:52.869890    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:53:52.869902    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:53:52.881317    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:53:52.881327    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:53:52.907338    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:53:52.907347    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:53:52.921336    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:53:52.921345    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:53:52.933400    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:53:52.933412    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:53:52.943956    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:53:52.943969    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:53:52.955277    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:53:52.955289    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:53:55.468192    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:00.469412    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:00.469834    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:00.507654    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:00.507787    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:00.527566    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:00.527669    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:00.542289    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:00.542354    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:00.554264    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:00.554333    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:00.565240    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:00.565307    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:00.575506    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:00.575573    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:00.585793    4326 logs.go:276] 0 containers: []
	W0831 15:54:00.585802    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:00.585852    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:00.596671    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:00.596689    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:00.596694    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:00.601413    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:00.601423    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:00.620561    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:00.620573    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:00.634723    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:00.634733    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:00.645791    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:00.645802    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:00.661479    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:00.661491    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:00.675153    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:00.675163    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:00.686937    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:00.686948    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:00.698008    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:00.698022    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:00.709991    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:00.710004    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:00.721832    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:00.721844    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:00.746274    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:00.746282    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:00.783899    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:00.783908    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:00.821298    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:00.821309    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:00.839082    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:00.839094    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:00.853099    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:00.853110    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:00.864808    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:00.864819    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:03.378243    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:08.379627    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:08.379783    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:08.395531    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:08.395611    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:08.408076    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:08.408143    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:08.418738    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:08.418803    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:08.436734    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:08.436812    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:08.447097    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:08.447160    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:08.457803    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:08.457867    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:08.468448    4326 logs.go:276] 0 containers: []
	W0831 15:54:08.468460    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:08.468515    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:08.478890    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:08.478907    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:08.478913    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:08.490125    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:08.490138    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:08.501667    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:08.501679    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:08.513765    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:08.513778    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:08.525325    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:08.525339    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:08.536683    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:08.536694    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:08.561281    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:08.561291    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:08.574829    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:08.574840    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:08.595781    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:08.595795    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:08.609816    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:08.609826    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:08.623808    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:08.623818    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:08.634776    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:08.634791    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:08.669704    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:08.669714    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:08.673854    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:08.673863    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:08.692767    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:08.692776    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:08.705267    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:08.705278    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:08.740258    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:08.740272    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:11.253601    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:16.255005    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:16.255461    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:16.300011    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:16.300131    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:16.323236    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:16.323325    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:16.337217    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:16.337287    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:16.349076    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:16.349150    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:16.359423    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:16.359483    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:16.369963    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:16.370027    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:16.380016    4326 logs.go:276] 0 containers: []
	W0831 15:54:16.380027    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:16.380076    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:16.390449    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:16.390466    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:16.390472    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:16.406444    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:16.406457    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:16.420773    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:16.420785    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:16.438348    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:16.438359    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:16.449799    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:16.449809    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:16.467254    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:16.467265    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:16.471727    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:16.471734    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:16.492520    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:16.492531    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:16.508749    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:16.508763    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:16.520665    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:16.520679    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:16.557989    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:16.558001    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:16.572363    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:16.572377    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:16.585351    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:16.585364    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:16.619384    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:16.619393    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:16.632598    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:16.632611    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:16.646647    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:16.646658    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:16.657859    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:16.657869    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:19.186257    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:24.188308    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:24.188643    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:24.224106    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:24.224219    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:24.241764    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:24.241832    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:24.255915    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:24.255973    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:24.267673    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:24.267729    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:24.278148    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:24.278218    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:24.288555    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:24.288625    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:24.298402    4326 logs.go:276] 0 containers: []
	W0831 15:54:24.298414    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:24.298470    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:24.309680    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:24.309700    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:24.309705    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:24.321243    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:24.321256    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:24.332244    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:24.332254    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:24.368610    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:24.368619    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:24.372668    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:24.372676    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:24.384736    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:24.384749    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:24.396044    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:24.396058    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:24.413030    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:24.413041    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:24.432802    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:24.432813    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:24.444966    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:24.444976    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:24.480544    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:24.480559    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:24.494385    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:24.494398    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:24.516690    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:24.516704    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:24.528927    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:24.528940    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:24.554519    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:24.554526    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:24.569835    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:24.569847    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:24.581620    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:24.581631    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:27.094577    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:32.097084    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:32.097474    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:32.136936    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:32.137051    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:32.157854    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:32.157942    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:32.172853    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:32.172913    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:32.185260    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:32.185331    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:32.195927    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:32.195985    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:32.207132    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:32.207203    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:32.222786    4326 logs.go:276] 0 containers: []
	W0831 15:54:32.222802    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:32.222856    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:32.233878    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:32.233894    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:32.233899    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:32.239000    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:32.239005    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:32.253855    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:32.253868    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:32.265581    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:32.265593    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:32.280947    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:32.280959    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:32.299006    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:32.299017    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:32.322642    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:32.322650    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:32.334260    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:32.334274    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:32.345384    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:32.345395    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:32.357293    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:32.357307    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:32.384298    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:32.384308    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:32.398003    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:32.398014    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:32.412248    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:32.412259    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:32.424743    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:32.424755    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:32.458591    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:32.458598    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:32.493494    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:32.493508    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:32.506499    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:32.506511    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:35.019741    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:40.022792    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:40.023165    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:40.068799    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:40.068913    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:40.089989    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:40.090091    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:40.111762    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:40.111820    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:40.126063    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:40.126133    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:40.136388    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:40.136451    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:40.146805    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:40.146875    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:40.157278    4326 logs.go:276] 0 containers: []
	W0831 15:54:40.157293    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:40.157350    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:40.167871    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:40.167889    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:40.167895    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:40.201564    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:40.201578    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:40.216121    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:40.216131    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:40.233788    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:40.233800    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:40.253415    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:40.253427    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:40.263996    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:40.264006    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:40.288512    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:40.288519    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:40.300336    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:40.300347    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:40.304739    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:40.304747    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:40.315764    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:40.315779    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:40.327853    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:40.327862    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:40.342116    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:40.342128    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:40.363477    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:40.363491    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:40.379240    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:40.379251    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:40.391814    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:40.391823    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:40.403012    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:40.403027    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:40.414419    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:40.414429    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:42.950939    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:47.953465    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:47.953679    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:47.965679    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:47.965755    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:47.981137    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:47.981210    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:47.991914    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:47.991977    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:48.002500    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:48.002571    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:48.013461    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:48.013524    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:48.024724    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:48.024781    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:48.034918    4326 logs.go:276] 0 containers: []
	W0831 15:54:48.034932    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:48.034985    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:48.045767    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:48.045787    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:48.045793    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:48.082548    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:48.082558    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:48.094555    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:48.094566    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:48.106563    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:48.106572    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:48.145291    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:48.145303    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:48.160417    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:48.160428    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:48.172816    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:48.172827    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:48.190922    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:48.190934    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:48.203068    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:48.203081    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:48.217070    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:48.217080    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:48.228983    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:48.228995    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:48.240636    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:48.240649    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:48.266423    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:48.266430    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:48.271046    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:48.271053    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:48.292910    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:48.292921    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:48.335867    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:48.335877    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:48.348873    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:48.348885    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:50.862025    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:54:55.864136    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:54:55.864551    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:54:55.905170    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:54:55.905312    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:54:55.927088    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:54:55.927180    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:54:55.942034    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:54:55.942114    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:54:55.954892    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:54:55.954963    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:54:55.965699    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:54:55.965771    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:54:55.976725    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:54:55.976789    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:54:55.986725    4326 logs.go:276] 0 containers: []
	W0831 15:54:55.986738    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:54:55.986793    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:54:55.997517    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:54:55.997533    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:54:55.997541    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:54:56.032551    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:54:56.032566    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:54:56.044325    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:54:56.044338    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:54:56.068908    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:54:56.068920    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:54:56.081108    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:54:56.081124    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:54:56.115792    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:54:56.115803    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:54:56.136094    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:54:56.136103    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:54:56.148706    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:54:56.148717    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:54:56.159437    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:54:56.159449    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:54:56.176828    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:54:56.176840    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:54:56.181431    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:54:56.181441    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:54:56.194682    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:54:56.194693    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:54:56.213487    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:54:56.213497    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:54:56.231800    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:54:56.231812    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:54:56.246009    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:54:56.246018    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:54:56.257276    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:54:56.257287    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:54:56.270259    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:54:56.270271    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:54:58.781513    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:03.783779    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:03.784198    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:03.824820    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:03.824961    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:03.846148    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:03.846260    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:03.863239    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:03.863315    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:03.875621    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:03.875694    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:03.885953    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:03.886024    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:03.896346    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:03.896419    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:03.906649    4326 logs.go:276] 0 containers: []
	W0831 15:55:03.906662    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:03.906719    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:03.917492    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:03.917509    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:03.917515    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:03.935355    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:03.935368    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:03.949655    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:03.949669    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:03.984692    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:03.984703    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:04.005525    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:04.005537    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:04.019754    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:04.019765    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:04.037399    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:04.037412    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:04.042460    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:04.042467    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:04.058929    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:04.058939    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:04.073063    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:04.073073    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:04.084138    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:04.084148    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:04.119028    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:04.119037    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:04.131364    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:04.131377    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:04.143701    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:04.143725    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:04.159966    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:04.159977    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:04.172736    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:04.172748    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:04.200244    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:04.200267    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:06.714765    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:11.716915    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:11.717105    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:11.739321    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:11.739424    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:11.754088    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:11.754173    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:11.767038    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:11.767115    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:11.778165    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:11.778226    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:11.788653    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:11.788709    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:11.799489    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:11.799554    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:11.809957    4326 logs.go:276] 0 containers: []
	W0831 15:55:11.809969    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:11.810016    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:11.820628    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
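Before every sweep the container IDs are rediscovered: one docker ps per component, filtering on the k8s_ name prefix that dockershim/cri-dockerd containers carry and printing only the ID column through a Go template (--format={{.ID}}). The "2 containers" results reflect an old and a new instance of each restarted component; kindnet matches nothing on this cluster, hence the repeated warning. A minimal sketch of that discovery step, with local docker execution assumed:

// Hedged sketch of the per-component container discovery above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func idsFor(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out)) // one short ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids := idsFor(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}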
	I0831 15:55:11.820643    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:11.820648    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:11.831819    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:11.831831    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:11.855201    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:11.855211    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:11.890932    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:11.890945    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:11.909089    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:11.909105    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:11.925410    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:11.925423    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:11.930328    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:11.930337    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:11.944191    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:11.944202    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:11.955775    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:11.955786    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:11.991131    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:11.991143    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:12.005752    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:12.005764    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:12.018167    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:12.018180    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:12.029128    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:12.029140    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:12.046622    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:12.046656    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:12.059542    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:12.059554    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:12.070991    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:12.071002    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:12.082460    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:12.082473    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:14.605467    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:19.608096    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:19.608520    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:19.647959    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:19.648081    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:19.669768    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:19.669886    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:19.684562    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:19.684629    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:19.698697    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:19.698770    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:19.709408    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:19.709479    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:19.720134    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:19.720207    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:19.730468    4326 logs.go:276] 0 containers: []
	W0831 15:55:19.730479    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:19.730530    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:19.741071    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:19.741087    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:19.741093    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:19.755938    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:19.755950    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:19.794410    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:19.794421    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:19.807358    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:19.807370    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:19.819535    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:19.819550    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:19.837641    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:19.837656    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:19.873780    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:19.873789    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:19.885158    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:19.885171    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:19.896428    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:19.896438    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:19.921888    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:19.921896    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:19.935689    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:19.935701    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:19.955671    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:19.955684    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:19.970267    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:19.970277    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:19.981621    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:19.981631    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:19.994508    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:19.994523    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:20.012422    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:20.012432    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:20.024018    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:20.024029    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:22.529788    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:27.532286    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:27.532413    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:27.551435    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:27.551510    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:27.562512    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:27.562582    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:27.573075    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:27.573145    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:27.584205    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:27.584275    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:27.595395    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:27.595464    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:27.606245    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:27.606321    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:27.616611    4326 logs.go:276] 0 containers: []
	W0831 15:55:27.616630    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:27.616691    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:27.626985    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:27.627004    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:27.627009    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:27.648325    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:27.648336    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:27.663605    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:27.663616    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:27.674654    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:27.674670    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:27.690365    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:27.690375    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:27.694937    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:27.694944    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:27.708619    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:27.708629    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:27.719989    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:27.720005    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:27.731616    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:27.731631    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:27.743204    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:27.743218    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:27.766190    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:27.766201    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:27.778749    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:27.778760    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:27.813303    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:27.813313    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:27.849266    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:27.849280    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:27.866674    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:27.866688    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:27.878236    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:27.878252    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:27.892445    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:27.892457    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:30.404893    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:35.407232    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:35.407691    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:35.447042    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:35.447179    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:35.469649    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:35.469767    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:35.485598    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:35.485691    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:35.497565    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:35.497636    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:35.508109    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:35.508173    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:35.518256    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:35.518314    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:35.528914    4326 logs.go:276] 0 containers: []
	W0831 15:55:35.528930    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:35.528986    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:35.539212    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:35.539230    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:35.539236    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:35.559907    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:35.559918    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:35.574279    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:35.574293    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:35.585807    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:35.585821    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:35.590202    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:35.590209    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:35.602047    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:35.602060    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:35.613112    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:35.613128    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:35.649924    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:35.649934    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:35.664369    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:35.664382    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:35.675539    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:35.675551    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:35.700482    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:35.700493    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:35.711720    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:35.711732    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:35.734851    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:35.734859    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:35.746230    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:35.746243    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:35.782890    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:35.782904    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:35.796679    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:35.796692    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:35.808020    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:35.808033    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:38.321518    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:43.324116    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:43.324514    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:43.362145    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:43.362279    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:43.383598    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:43.383690    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:43.406459    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:43.406534    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:43.418601    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:43.418682    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:43.429086    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:43.429158    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:43.440709    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:43.440778    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:43.451523    4326 logs.go:276] 0 containers: []
	W0831 15:55:43.451534    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:43.451582    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:43.463418    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:43.463438    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:43.463445    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:43.488004    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:43.488022    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:43.505394    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:43.505407    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:43.517899    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:43.517913    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:43.530013    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:43.530027    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:43.542013    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:43.542027    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:43.546138    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:43.546144    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:43.563266    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:43.563277    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:43.581540    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:43.581550    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:43.595544    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:43.595560    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:43.612774    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:43.612785    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:43.624755    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:43.624779    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:43.664519    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:43.664533    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:43.700842    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:43.700855    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:43.726322    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:43.726336    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:43.738150    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:43.738163    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:43.749699    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:43.749709    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:46.263432    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:51.265784    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:51.265892    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:51.277133    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:51.277217    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:51.287851    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:51.287929    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:51.298339    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:51.298412    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:51.309176    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:51.309239    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:51.319592    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:51.319651    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:51.332131    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:51.332200    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:51.342768    4326 logs.go:276] 0 containers: []
	W0831 15:55:51.342779    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:51.342833    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:51.353464    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:51.353522    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:51.353537    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:51.391118    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:51.391133    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:51.395625    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:51.395631    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:51.431303    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:51.431316    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:51.445494    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:51.445506    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:51.457503    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:51.457514    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:51.470174    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:51.470186    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:51.481555    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:51.481566    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:51.505146    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:51.505157    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:51.520027    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:51.520040    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:51.534582    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:51.534592    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:51.546644    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:51.546657    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:51.558486    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:51.558497    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:51.576952    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:51.576966    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:51.604529    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:51.604540    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:51.621395    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:51.621406    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:51.633214    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:51.633225    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:54.147234    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:59.149412    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:59.149532    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:59.160537    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:59.160606    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:59.171137    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:59.171211    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:59.182291    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:59.182357    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:59.192487    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:59.192549    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:59.202869    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:59.202928    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:59.212987    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:59.213053    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:59.223896    4326 logs.go:276] 0 containers: []
	W0831 15:55:59.223907    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:59.223961    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:59.237574    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:59.237589    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:59.237595    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:59.250867    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:59.250880    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:59.262129    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:59.262142    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:59.298577    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:59.298590    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:59.314191    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:59.314204    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:59.325765    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:59.325775    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:59.348076    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:59.348083    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:59.382990    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:59.383001    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:59.393518    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:59.393531    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:59.416762    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:59.416771    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:59.434739    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:59.434749    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:59.446132    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:59.446143    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:59.450506    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:59.450513    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:59.464121    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:59.464131    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:59.475485    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:59.475496    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:59.487557    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:59.487570    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:59.501476    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:59.501489    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:56:02.020045    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:07.022685    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:07.023008    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:56:07.060388    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:56:07.060517    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:56:07.078299    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:56:07.078388    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:56:07.092029    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:56:07.092099    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:56:07.104130    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:56:07.104193    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:56:07.114944    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:56:07.114999    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:56:07.125999    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:56:07.126074    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:56:07.137016    4326 logs.go:276] 0 containers: []
	W0831 15:56:07.137029    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:56:07.137090    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:56:07.150733    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:56:07.150752    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:56:07.150758    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:56:07.169568    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:56:07.169585    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:56:07.185158    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:56:07.185169    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:56:07.201582    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:56:07.201596    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:56:07.214522    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:56:07.214535    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:56:07.233118    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:56:07.233140    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:56:07.246468    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:56:07.246480    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:56:07.270245    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:56:07.270255    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:56:07.305578    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:56:07.305598    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:56:07.318239    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:56:07.318250    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:56:07.338267    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:56:07.338281    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:56:07.352421    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:56:07.352435    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:56:07.389364    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:56:07.389372    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:56:07.401179    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:56:07.401192    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:56:07.405700    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:56:07.405706    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:56:07.417678    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:56:07.417690    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:56:07.429534    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:56:07.429544    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:56:09.951417    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:14.953471    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:14.953572    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:56:14.964677    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:56:14.964751    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:56:14.976071    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:56:14.976150    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:56:14.987326    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:56:14.987401    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:56:14.998536    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:56:14.998606    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:56:15.012354    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:56:15.012422    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:56:15.023422    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:56:15.023490    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:56:15.033866    4326 logs.go:276] 0 containers: []
	W0831 15:56:15.033876    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:56:15.033933    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:56:15.045094    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:56:15.045111    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:56:15.045117    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:56:15.063159    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:56:15.063169    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:56:15.075255    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:56:15.075265    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:56:15.090873    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:56:15.090884    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:56:15.103024    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:56:15.103038    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:56:15.125228    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:56:15.125238    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:56:15.129308    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:56:15.129314    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:56:15.166275    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:56:15.166291    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:56:15.196477    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:56:15.196487    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:56:15.207770    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:56:15.207782    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:56:15.223786    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:56:15.223796    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:56:15.241297    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:56:15.241307    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:56:15.255036    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:56:15.255045    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:56:15.267335    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:56:15.267344    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:56:15.304137    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:56:15.304146    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:56:15.317914    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:56:15.317922    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:56:15.331579    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:56:15.331595    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:56:17.844731    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:22.846785    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:22.846858    4326 kubeadm.go:597] duration metric: took 4m3.28939825s to restartPrimaryControlPlane
	W0831 15:56:22.846928    4326 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0831 15:56:22.846950    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0831 15:56:23.852002    4326 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005069084s)
	I0831 15:56:23.852086    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
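This is the turning point of the run: after 4m3.3s of failed healthz probes, restartPrimaryControlPlane is abandoned and minikube falls back to wiping the control plane with kubeadm reset and re-initializing from scratch. A sketch of that fallback, with the binary path and CRI socket copied from the log lines above and local execution assumed:

// Hedged sketch of the reset fallback. Error handling is simplified.
package main

import (
	"fmt"
	"os/exec"
)

// runBash mirrors the `/bin/bash -c "..."` pattern used throughout this log.
func runBash(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s\n%s", cmd, out)
	return err
}

func main() {
	// Wipe the half-restarted control plane, as in the log above.
	if err := runBash(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
		`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	// Check whether the kubelet unit is still active; a non-zero exit
	// here simply means it is not.
	if err := runBash("sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet not active:", err)
	}
}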
	I0831 15:56:23.857226    4326 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:56:23.860190    4326 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:56:23.863050    4326 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 15:56:23.863056    4326 kubeadm.go:157] found existing configuration files:
	
	I0831 15:56:23.863078    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf
	I0831 15:56:23.865951    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 15:56:23.865978    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 15:56:23.868768    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf
	I0831 15:56:23.871669    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 15:56:23.871697    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 15:56:23.874807    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf
	I0831 15:56:23.877372    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 15:56:23.877399    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:56:23.880322    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf
	I0831 15:56:23.883392    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 15:56:23.883416    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
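The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected endpoint https://control-plane.minikube.internal:50293, and a failed grep (status 2 here, since kubeadm reset removed the files entirely) triggers an rm -f. The same check-and-remove loop, sketched under the same local-execution assumption:

// Hedged sketch of the stale-kubeconfig cleanup above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50293"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the file) is missing.
		if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			// Mirror rm -f: a missing file is not an error.
			if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
				fmt.Println("remove failed:", err)
			}
		}
	}
}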
	I0831 15:56:23.886271    4326 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 15:56:23.900868    4326 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0831 15:56:23.900898    4326 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 15:56:23.956985    4326 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 15:56:23.957052    4326 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 15:56:23.957116    4326 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 15:56:24.006158    4326 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 15:56:24.010366    4326 out.go:235]   - Generating certificates and keys ...
	I0831 15:56:24.010398    4326 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 15:56:24.010437    4326 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 15:56:24.010476    4326 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0831 15:56:24.010505    4326 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0831 15:56:24.010543    4326 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0831 15:56:24.010573    4326 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0831 15:56:24.010607    4326 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0831 15:56:24.010650    4326 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0831 15:56:24.010690    4326 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0831 15:56:24.010729    4326 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0831 15:56:24.010745    4326 kubeadm.go:310] [certs] Using the existing "sa" key
	I0831 15:56:24.010768    4326 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 15:56:24.135818    4326 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 15:56:24.274631    4326 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 15:56:24.322654    4326 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 15:56:24.490637    4326 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 15:56:24.522298    4326 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 15:56:24.522704    4326 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 15:56:24.522753    4326 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 15:56:24.596636    4326 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 15:56:24.600970    4326 out.go:235]   - Booting up control plane ...
	I0831 15:56:24.601017    4326 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 15:56:24.601058    4326 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 15:56:24.601103    4326 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 15:56:24.601149    4326 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 15:56:24.601305    4326 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
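The bring-up above is file-driven: kubeadm writes one static Pod manifest per component into /etc/kubernetes/manifests, the kubelet (watching that directory) starts them, and wait-control-plane then polls until the apiserver answers, bounded at 4m0s. A sketch of a matching wait on the manifest side; the one-second poll interval is an assumption:

// Hedged sketch: wait for the four static Pod manifests kubeadm drops,
// bounded like kubeadm's 4m0s wait-control-plane phase.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	dir := "/etc/kubernetes/manifests"
	want := []string{"kube-apiserver.yaml", "kube-controller-manager.yaml",
		"kube-scheduler.yaml", "etcd.yaml"}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		missing := 0
		for _, f := range want {
			if _, err := os.Stat(filepath.Join(dir, f)); err != nil {
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all control-plane manifests present")
			return
		}
		time.Sleep(time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for manifests")
}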
	I0831 15:56:29.100077    4326 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503516 seconds
	I0831 15:56:29.100258    4326 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 15:56:29.104724    4326 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 15:56:29.614717    4326 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 15:56:29.614895    4326 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-137000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 15:56:30.118816    4326 kubeadm.go:310] [bootstrap-token] Using token: qryssf.o5pm00f14c813c2u
	I0831 15:56:30.125369    4326 out.go:235]   - Configuring RBAC rules ...
	I0831 15:56:30.125421    4326 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 15:56:30.125458    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 15:56:30.128862    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 15:56:30.129840    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0831 15:56:30.130692    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 15:56:30.131572    4326 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 15:56:30.135045    4326 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 15:56:30.292697    4326 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 15:56:30.523944    4326 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 15:56:30.524416    4326 kubeadm.go:310] 
	I0831 15:56:30.524449    4326 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 15:56:30.524452    4326 kubeadm.go:310] 
	I0831 15:56:30.524497    4326 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 15:56:30.524506    4326 kubeadm.go:310] 
	I0831 15:56:30.524529    4326 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 15:56:30.524560    4326 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 15:56:30.524586    4326 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 15:56:30.524588    4326 kubeadm.go:310] 
	I0831 15:56:30.524611    4326 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 15:56:30.524613    4326 kubeadm.go:310] 
	I0831 15:56:30.524637    4326 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 15:56:30.524676    4326 kubeadm.go:310] 
	I0831 15:56:30.524705    4326 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 15:56:30.524777    4326 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 15:56:30.524813    4326 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 15:56:30.524848    4326 kubeadm.go:310] 
	I0831 15:56:30.524891    4326 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 15:56:30.524942    4326 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 15:56:30.524947    4326 kubeadm.go:310] 
	I0831 15:56:30.525001    4326 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qryssf.o5pm00f14c813c2u \
	I0831 15:56:30.525066    4326 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 \
	I0831 15:56:30.525079    4326 kubeadm.go:310] 	--control-plane 
	I0831 15:56:30.525081    4326 kubeadm.go:310] 
	I0831 15:56:30.525161    4326 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 15:56:30.525167    4326 kubeadm.go:310] 
	I0831 15:56:30.525206    4326 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qryssf.o5pm00f14c813c2u \
	I0831 15:56:30.525262    4326 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 
	I0831 15:56:30.525320    4326 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
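Note on the warning above: kubeadm reports that the kubelet service is not enabled in the guest, so it would not restart on reboot. A minimal way to clear this, assuming the profile name running-upgrade-137000 from this run, is to run the suggested command inside the VM over minikube ssh (sketch, not part of the test flow):

    # enable kubelet at boot inside the minikube guest (profile name taken from this log)
    minikube ssh -p running-upgrade-137000 -- sudo systemctl enable kubelet.service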
	I0831 15:56:30.525329    4326 cni.go:84] Creating CNI manager for ""
	I0831 15:56:30.525336    4326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:56:30.529848    4326 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 15:56:30.536799    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 15:56:30.539816    4326 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
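For reference, the 496-byte conflist copied above is what configures the bridge CNI selected two lines earlier. The exact file minikube writes is not shown in this log; the following is an illustrative bridge + portmap conflist of the same general shape (the subnet 10.244.0.0/16 and all field values are assumptions, not taken from this run):

    # sketch of a bridge CNI conflist like the one scp'd to /etc/cni/net.d/1-k8s.conflist
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF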
	I0831 15:56:30.544436    4326 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 15:56:30.544477    4326 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:56:30.544486    4326 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-137000 minikube.k8s.io/updated_at=2024_08_31T15_56_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=running-upgrade-137000 minikube.k8s.io/primary=true
	I0831 15:56:30.587463    4326 ops.go:34] apiserver oom_adj: -16
	I0831 15:56:30.587463    4326 kubeadm.go:1113] duration metric: took 43.019917ms to wait for elevateKubeSystemPrivileges
	I0831 15:56:30.587509    4326 kubeadm.go:394] duration metric: took 4m11.051510875s to StartCluster
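The kubectl label call a few lines up stamps the node with minikube metadata (updated_at, version, commit, primary). To confirm those labels landed, one could reuse the same in-VM binary and kubeconfig that the later describe-nodes steps use; a sketch:

    # verify the minikube.k8s.io/* labels on the node (binary and kubeconfig paths copied from this log)
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node running-upgrade-137000 --show-labels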
	I0831 15:56:30.587522    4326 settings.go:142] acquiring lock: {Name:mk1f68c46dbdc908eb5f421f0b3b97ef9fe18a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:30.587616    4326 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:56:30.588010    4326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:30.588212    4326 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:56:30.588268    4326 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 15:56:30.588308    4326 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-137000"
	I0831 15:56:30.588321    4326 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-137000"
	W0831 15:56:30.588326    4326 addons.go:243] addon storage-provisioner should already be in state true
	I0831 15:56:30.588337    4326 host.go:66] Checking if "running-upgrade-137000" exists ...
	I0831 15:56:30.588343    4326 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-137000"
	I0831 15:56:30.588376    4326 config.go:182] Loaded profile config "running-upgrade-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:56:30.588385    4326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-137000"
	I0831 15:56:30.592726    4326 out.go:177] * Verifying Kubernetes components...
	I0831 15:56:30.593347    4326 kapi.go:59] client config for running-upgrade-137000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c98030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 15:56:30.596997    4326 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-137000"
	W0831 15:56:30.597002    4326 addons.go:243] addon default-storageclass should already be in state true
	I0831 15:56:30.597009    4326 host.go:66] Checking if "running-upgrade-137000" exists ...
	I0831 15:56:30.597498    4326 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 15:56:30.597503    4326 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 15:56:30.597508    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	I0831 15:56:30.600789    4326 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:30.604740    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:30.608809    4326 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:56:30.608815    4326 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 15:56:30.608821    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
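Both addon installers reuse the same ssh endpoint recorded in the sshutil lines above. That connection can be reproduced by hand from the host, since the key path, port, and user are all in the log:

    # manual ssh into the running-upgrade-137000 guest (parameters copied from the sshutil lines above)
    ssh -i /Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa \
      -p 50261 docker@localhost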
	I0831 15:56:30.679460    4326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:56:30.685334    4326 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:56:30.685375    4326 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:30.689590    4326 api_server.go:72] duration metric: took 101.369209ms to wait for apiserver process to appear ...
	I0831 15:56:30.689597    4326 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:56:30.689603    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:30.704360    4326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 15:56:30.723696    4326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
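The two kubectl apply calls above install the storageclass and storage-provisioner manifests staged earlier. The 271-byte storageclass.yaml itself is not shown in this log; as a minimal sketch, a minikube-style default StorageClass looks roughly like the following (the provisioner name is an assumption, not confirmed by this run):

    # illustrative default StorageClass of the kind applied above
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath
    EOF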
	I0831 15:56:31.046340    4326 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 15:56:31.046353    4326 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 15:56:35.691538    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:35.691598    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:40.691679    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:40.691720    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:45.691890    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:45.691980    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:50.692238    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:50.692264    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:55.692878    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:55.692926    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:00.693553    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:00.693602    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
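Each healthz probe above times out after roughly five seconds and the loop immediately re-arms. The same endpoint can be probed by hand to separate a hung apiserver from a broken route; -k is needed because the serving certificate is not in the host trust store:

    # one-shot healthz probe with the same ~5s budget the loop uses (sketch)
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz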
	W0831 15:57:01.046950    4326 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0831 15:57:01.055255    4326 out.go:177] * Enabled addons: storage-provisioner
	I0831 15:57:01.062238    4326 addons.go:510] duration metric: took 30.474875625s for enable addons: enabled=[storage-provisioner]
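Of the two addons, default-storageclass failed only because its callback could not list StorageClasses over the unreachable 10.0.2.15:8443 endpoint, while storage-provisioner succeeded because it only needed its manifest applied. Once the apiserver is reachable again, re-enabling is a one-liner (profile name from this run):

    # retry the addon that failed against the dead apiserver (sketch)
    minikube addons enable default-storageclass -p running-upgrade-137000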
	I0831 15:57:05.693885    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:05.693915    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:10.694875    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:10.694911    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:15.696111    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:15.696136    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:20.697688    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:20.697747    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:25.699849    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:25.699890    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:30.702042    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:30.702141    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:30.712984    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:30.713063    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:30.723662    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:30.723730    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:30.734736    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:30.734802    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:30.744720    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:30.744787    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:30.762411    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:30.762484    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:30.779445    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:30.779526    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:30.793634    4326 logs.go:276] 0 containers: []
	W0831 15:57:30.793646    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:30.793707    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:30.807550    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:30.807565    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:30.807571    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:30.843335    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:30.843348    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:30.858001    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:30.858014    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:30.874811    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:30.874829    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:30.886644    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:30.886655    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:30.899313    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:30.899325    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:30.917149    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:30.917160    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:30.928880    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:30.928894    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:30.952492    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:30.952502    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:30.957041    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:30.957051    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:30.993466    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:30.993476    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:31.008351    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:31.008365    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:31.019707    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:31.019717    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
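The gather pass above is fully scriptable: minikube enumerates each control-plane container by its k8s_ name filter, then tails 400 lines from each hit, exactly as the Run: lines show. A condensed shell equivalent, with the container list taken from this log (the inner loop covers cases like coredns, which matched two containers):

    # reproduce the per-container log gathering done above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_${c} --format={{.ID}}); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "${id}"
      done
    done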
	I0831 15:57:33.536753    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:38.538933    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:38.539092    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:38.550958    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:38.551026    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:38.562931    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:38.563015    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:38.573230    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:38.573298    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:38.587195    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:38.587259    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:38.597776    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:38.597845    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:38.610845    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:38.610912    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:38.621024    4326 logs.go:276] 0 containers: []
	W0831 15:57:38.621035    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:38.621092    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:38.631495    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:38.631510    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:38.631516    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:38.643261    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:38.643275    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:38.666948    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:38.666958    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:38.671402    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:38.671409    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:38.707211    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:38.707222    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:38.721994    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:38.722009    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:38.735921    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:38.735932    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:38.747102    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:38.747116    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:38.768911    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:38.768923    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:38.802807    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:38.802820    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:38.817225    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:38.817234    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:38.832497    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:38.832508    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:38.844389    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:38.844402    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:41.357582    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:46.359695    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:46.359792    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:46.372857    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:46.372938    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:46.384105    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:46.384174    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:46.394976    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:46.395037    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:46.409526    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:46.409595    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:46.420115    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:46.420178    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:46.430730    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:46.430793    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:46.440972    4326 logs.go:276] 0 containers: []
	W0831 15:57:46.440984    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:46.441035    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:46.451075    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:46.451089    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:46.451094    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:46.484146    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:46.484158    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:46.502194    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:46.502205    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:46.515463    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:46.515474    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:46.527100    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:46.527109    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:46.539323    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:46.539337    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:46.556690    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:46.556701    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:46.580366    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:46.580375    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:46.584841    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:46.584849    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:46.620553    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:46.620565    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:46.632083    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:46.632094    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:46.652828    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:46.652840    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:46.663859    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:46.663871    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:49.177911    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:54.180310    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:54.180419    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:54.192128    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:54.192201    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:54.203169    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:54.203237    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:54.214905    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:54.214977    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:54.227811    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:54.227880    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:54.238086    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:54.238158    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:54.248820    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:54.248892    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:54.259192    4326 logs.go:276] 0 containers: []
	W0831 15:57:54.259206    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:54.259259    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:54.269466    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:54.269482    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:54.269489    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:54.285325    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:54.285337    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:54.308425    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:54.308436    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:54.340278    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:54.340288    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:54.377777    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:54.377789    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:54.391647    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:54.391660    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:54.403028    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:54.403042    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:54.419211    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:54.419221    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:54.436331    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:54.436345    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:54.440864    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:54.440871    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:54.454816    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:54.454826    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:54.466421    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:54.466430    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:54.477782    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:54.477793    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:56.991965    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:01.994029    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:01.994118    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:02.005493    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:02.005565    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:02.017463    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:02.017528    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:02.029242    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:02.029324    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:02.040202    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:02.040273    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:02.052747    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:02.052823    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:02.064753    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:02.064825    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:02.076198    4326 logs.go:276] 0 containers: []
	W0831 15:58:02.076209    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:02.076266    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:02.087987    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:02.088001    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:02.088007    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:02.101038    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:02.101049    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:02.116561    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:02.116571    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:02.129891    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:02.129901    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:02.145582    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:02.145593    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:02.178229    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:02.178240    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:02.183068    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:02.183076    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:02.218584    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:02.218594    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:02.233366    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:02.233386    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:02.257729    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:02.257742    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:02.268865    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:02.268876    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:02.287940    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:02.287950    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:02.299414    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:02.299428    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:04.822411    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:09.824522    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:09.824598    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:09.846228    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:09.846379    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:09.858661    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:09.858728    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:09.870090    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:09.870156    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:09.881453    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:09.881521    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:09.893315    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:09.893384    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:09.905047    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:09.905127    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:09.916500    4326 logs.go:276] 0 containers: []
	W0831 15:58:09.916512    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:09.916574    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:09.928108    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:09.928124    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:09.928129    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:09.947369    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:09.947380    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:09.974111    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:09.974128    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:10.010592    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:10.010616    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:10.016040    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:10.016053    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:10.045028    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:10.045039    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:10.057384    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:10.057397    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:10.072540    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:10.072554    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:10.088225    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:10.088237    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:10.099515    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:10.099527    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:10.135305    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:10.135319    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:10.149162    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:10.149174    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:10.160509    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:10.160522    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:12.672771    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:17.672984    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:17.673126    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:17.688863    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:17.688928    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:17.700626    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:17.700738    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:17.713030    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:17.713104    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:17.725165    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:17.725239    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:17.736096    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:17.736166    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:17.747848    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:17.747922    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:17.759485    4326 logs.go:276] 0 containers: []
	W0831 15:58:17.759515    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:17.759576    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:17.771103    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:17.771123    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:17.771131    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:17.787213    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:17.787225    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:17.804254    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:17.804267    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:17.816597    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:17.816607    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:17.842058    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:17.842071    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:17.861118    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:17.861135    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:17.896825    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:17.896845    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:17.901956    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:17.901965    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:17.942950    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:17.942961    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:17.957814    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:17.957827    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:17.973304    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:17.973314    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:17.985771    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:17.985786    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:17.998816    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:17.998828    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:20.516953    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:25.519165    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:25.519243    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:25.535381    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:25.535462    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:25.548952    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:25.549031    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:25.565707    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:25.565782    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:25.577083    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:25.577132    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:25.588640    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:25.588696    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:25.601702    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:25.601766    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:25.614563    4326 logs.go:276] 0 containers: []
	W0831 15:58:25.614575    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:25.614637    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:25.626084    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:25.626100    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:25.626105    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:25.638325    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:25.638334    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:25.657183    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:25.657194    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:25.682864    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:25.682876    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:25.720198    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:25.720214    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:25.725368    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:25.725378    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:25.766627    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:25.766640    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:25.780238    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:25.780252    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:25.796347    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:25.796364    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:25.808447    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:25.808459    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:25.824255    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:25.824265    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:25.839540    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:25.839556    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:25.852623    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:25.852637    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:28.367301    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:33.369450    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:33.369627    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:33.383795    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:33.383871    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:33.395315    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:33.395391    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:33.405764    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:33.405836    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:33.416088    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:33.416154    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:33.426222    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:33.426299    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:33.437331    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:33.437400    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:33.448765    4326 logs.go:276] 0 containers: []
	W0831 15:58:33.448776    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:33.448833    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:33.459791    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:33.459811    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:33.459817    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:33.496134    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:33.496150    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:33.535230    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:33.535242    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:33.551208    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:33.551220    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:33.564255    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:33.564279    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:33.588791    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:33.588804    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:33.600941    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:33.600954    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:33.606284    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:33.606298    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:33.622444    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:33.622454    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:33.637170    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:33.637186    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:33.650019    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:33.650029    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:33.662261    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:33.662273    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:33.682559    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:33.682572    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
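The pattern above repeats for the rest of this run: each probe of https://10.0.2.15:8443/healthz gives up after the 5-second client timeout ("Client.Timeout exceeded while awaiting headers"), and every failure triggers another gathering pass over the k8s_* containers (`docker ps -a` per component, then `docker logs --tail 400` per container ID). A minimal Go sketch of that probe loop, reconstructed from the observable behavior rather than taken from minikube's source:

```go
// Hedged reconstruction of the retry pattern visible in this log: not
// minikube's actual implementation, just the observable behavior
// (5 s client timeout per probe, short pause between attempts).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the "Client.Timeout exceeded" errors above
		Transport: &http.Transport{
			// assumption: the apiserver serves a self-signed cert inside the VM
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; ; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("attempt %d: stopped: %v\n", attempt, err)
			// on failure, the log-gathering pass (docker ps + docker logs) runs here
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
```

The sleep stands in for the ~2.5 s gap visible between a gathering pass finishing and the next probe starting (15:58:33.68 → 15:58:36.19 above).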
	I0831 15:58:36.197802    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:41.200182    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:41.200627    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:41.241199    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:41.241332    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:41.261518    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:41.261611    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:41.276671    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:41.276749    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:41.289086    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:41.289160    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:41.300115    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:41.300185    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:41.310659    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:41.310726    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:41.321399    4326 logs.go:276] 0 containers: []
	W0831 15:58:41.321411    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:41.321471    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:41.331880    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:41.331895    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:41.331900    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:41.346414    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:41.346424    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:41.361113    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:41.361130    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:41.373619    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:41.373630    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:41.399988    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:41.400006    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:41.435991    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:41.436008    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:41.441314    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:41.441324    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:41.479444    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:41.479456    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:41.492978    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:41.492989    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:41.511530    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:41.511541    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:41.525043    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:41.525056    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:41.537377    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:41.537389    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:41.550039    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:41.550051    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:44.067833    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:49.070024    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:49.070241    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:49.089619    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:49.089718    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:49.104237    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:49.104315    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:49.116405    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:58:49.116480    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:49.126994    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:49.127062    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:49.137925    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:49.137984    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:49.148136    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:49.148204    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:49.158228    4326 logs.go:276] 0 containers: []
	W0831 15:58:49.158239    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:49.158294    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:49.168847    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:49.168865    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:49.168870    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:49.181187    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:49.181198    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:49.202311    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:49.202321    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:49.228564    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:49.228574    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:49.261741    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:49.261755    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:49.276915    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:49.276928    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:49.289817    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:49.289827    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:49.308539    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:49.308551    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:49.351163    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:58:49.351173    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:58:49.364718    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:58:49.364733    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:58:49.376745    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:49.376758    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:49.389363    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:49.389380    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:49.394600    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:49.394612    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:49.409736    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:49.409749    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:49.426238    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:49.426254    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
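One detail changes in the pass above: the coredns enumeration now returns 4 containers ([966afe209f02 f231366b3e30 ba21391de275 607c83286101]) where the earlier passes found 2, so two new CoreDNS containers were created while the apiserver stayed unreachable. The count comes straight from the docker ps filter; a small sketch of that enumeration step (an assumed reconstruction, not minikube's code):

```go
// Assumed sketch of the container-enumeration step shown above:
// list all k8s_coredns containers by ID and count them.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_coredns", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids) // the log shows this jumping 2 -> 4
}
```

Each ID in the list then gets its own `docker logs --tail 400 <id>` call in the gathering pass that follows.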
	I0831 15:58:51.941612    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:56.943824    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:56.944050    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:56.966804    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:56.966891    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:56.981870    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:56.981939    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:56.996435    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:58:56.996508    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:57.007094    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:57.007153    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:57.017000    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:57.017066    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:57.027376    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:57.027441    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:57.037820    4326 logs.go:276] 0 containers: []
	W0831 15:58:57.037830    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:57.037886    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:57.048262    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:57.048281    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:57.048286    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:57.082067    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:57.082079    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:57.086802    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:57.086812    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:57.101479    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:58:57.101489    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:58:57.112748    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:57.112761    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:57.124035    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:57.124046    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:57.141796    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:57.141806    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:57.156176    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:57.156189    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:57.170971    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:57.170980    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:57.183013    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:57.183024    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:57.194631    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:57.194642    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:57.208661    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:57.208670    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:57.247002    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:57.247012    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:57.261823    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:57.261834    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:57.289001    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:58:57.289016    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:58:59.803866    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:04.806058    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:04.806227    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:04.820630    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:04.820727    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:04.833722    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:04.833807    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:04.847252    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:04.847337    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:04.860974    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:04.861061    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:04.871542    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:04.871624    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:04.882201    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:04.882278    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:04.892525    4326 logs.go:276] 0 containers: []
	W0831 15:59:04.892536    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:04.892602    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:04.902852    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:04.902872    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:04.902877    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:04.937713    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:04.937724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:04.949337    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:04.949348    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:04.963592    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:04.963601    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:04.976351    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:04.976362    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:04.987950    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:04.987959    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:05.005846    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:05.005857    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:05.017224    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:05.017238    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:05.022600    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:05.022608    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:05.057234    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:05.057247    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:05.081024    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:05.081033    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:05.095125    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:05.095140    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:05.106596    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:05.106608    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:05.124474    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:05.124483    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:05.139678    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:05.139687    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:07.652848    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:12.655041    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:12.655151    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:12.668725    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:12.668801    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:12.679673    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:12.679736    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:12.690540    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:12.690612    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:12.701992    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:12.702058    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:12.712963    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:12.713040    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:12.726705    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:12.726767    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:12.737550    4326 logs.go:276] 0 containers: []
	W0831 15:59:12.737563    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:12.737621    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:12.748002    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:12.748020    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:12.748028    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:12.759765    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:12.759778    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:12.775816    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:12.775827    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:12.788148    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:12.788158    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:12.799954    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:12.799965    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:12.832256    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:12.832271    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:12.836836    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:12.836845    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:12.854273    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:12.854286    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:12.868867    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:12.868877    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:12.880885    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:12.880899    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:12.892690    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:12.892703    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:12.917732    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:12.917743    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:12.929790    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:12.929803    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:12.941595    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:12.941607    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:12.977592    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:12.977604    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:15.495561    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:20.496807    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:20.497003    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:20.517749    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:20.517854    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:20.532259    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:20.532331    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:20.546109    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:20.546179    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:20.557240    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:20.557312    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:20.573005    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:20.573074    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:20.583373    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:20.583442    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:20.593339    4326 logs.go:276] 0 containers: []
	W0831 15:59:20.593348    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:20.593402    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:20.605873    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:20.605890    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:20.605896    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:20.617713    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:20.617724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:20.629340    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:20.629352    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:20.645716    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:20.645727    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:20.657262    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:20.657276    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:20.672536    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:20.672547    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:20.688390    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:20.688399    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:20.706262    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:20.706276    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:20.739286    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:20.739294    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:20.756322    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:20.756331    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:20.768439    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:20.768449    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:20.780274    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:20.780285    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:20.815699    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:20.815711    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:20.836984    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:20.836996    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:20.862543    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:20.862554    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
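Alongside the per-container logs, each pass also pulls journald history: `journalctl -u kubelet -n 400` for the kubelet and a combined `-u docker -u cri-docker -n 400` for the runtime units. A sketch of that step (assumed shape, not minikube's ssh_runner code):

```go
// Assumed sketch of the journald gathering steps above: tail the last 400
// entries for one or more systemd units, mirroring the -u/-n flags in the log.
package main

import (
	"fmt"
	"os/exec"
)

func gatherUnits(units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", "400")
	out, err := exec.Command("sudo", args...).Output()
	return string(out), err
}

func main() {
	logs, err := gatherUnits("docker", "cri-docker")
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	fmt.Print(logs)
}
```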
	I0831 15:59:23.369188    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:28.371420    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:28.371696    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:28.397959    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:28.398075    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:28.420558    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:28.420633    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:28.433072    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:28.433148    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:28.448354    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:28.448418    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:28.463822    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:28.463889    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:28.474581    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:28.474648    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:28.484804    4326 logs.go:276] 0 containers: []
	W0831 15:59:28.484814    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:28.484868    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:28.495104    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:28.495120    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:28.495126    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:28.509583    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:28.509595    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:28.521960    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:28.521970    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:28.546868    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:28.546886    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:28.559835    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:28.559850    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:28.571607    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:28.571620    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:28.594815    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:28.594828    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:28.617105    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:28.617119    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:28.642104    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:28.642117    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:28.675544    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:28.675555    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:28.710081    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:28.710092    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:28.731883    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:28.731899    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:28.748651    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:28.748663    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:28.753804    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:28.753812    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:28.768113    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:28.768126    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:31.285323    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:36.286907    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:36.287205    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:36.313058    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:36.313171    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:36.333074    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:36.333152    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:36.347984    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:36.348060    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:36.363663    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:36.363735    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:36.374694    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:36.374754    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:36.384857    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:36.384925    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:36.395199    4326 logs.go:276] 0 containers: []
	W0831 15:59:36.395208    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:36.395256    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:36.405420    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:36.405444    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:36.405449    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:36.420379    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:36.420391    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:36.432086    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:36.432097    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:36.443938    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:36.443948    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:36.476478    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:36.476490    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:36.488202    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:36.488213    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:36.503023    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:36.503032    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:36.527015    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:36.527024    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:36.531447    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:36.531455    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:36.542741    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:36.542753    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:36.554169    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:36.554181    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:36.577851    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:36.577860    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:36.614010    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:36.614021    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:36.632676    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:36.632690    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:36.644347    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:36.644359    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:39.159168    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:44.161760    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:44.161922    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:44.176887    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:44.176960    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:44.188526    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:44.188595    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:44.199110    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:44.199176    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:44.209483    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:44.209545    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:44.219867    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:44.219939    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:44.241343    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:44.241403    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:44.252277    4326 logs.go:276] 0 containers: []
	W0831 15:59:44.252288    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:44.252340    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:44.262685    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:44.262700    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:44.262706    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:44.277741    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:44.277755    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:44.297712    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:44.297724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:44.312731    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:44.312744    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:44.324979    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:44.324989    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:44.358149    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:44.358162    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:44.372202    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:44.372215    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:44.384491    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:44.384500    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:44.395974    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:44.395987    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:44.407222    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:44.407232    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:44.419143    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:44.419156    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:44.444191    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:44.444199    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:44.456632    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:44.456642    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:44.461630    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:44.461636    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:44.497727    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:44.497740    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:47.018625    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:52.019344    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:52.019503    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:52.031875    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:52.031953    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:52.042166    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:52.042239    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:52.052564    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:52.052636    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:52.063703    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:52.063771    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:52.074158    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:52.074225    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:52.083850    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:52.083912    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:52.094090    4326 logs.go:276] 0 containers: []
	W0831 15:59:52.094105    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:52.094168    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:52.104617    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:52.104636    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:52.104642    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:52.116640    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:52.116653    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:52.133619    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:52.133628    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:52.167477    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:52.167487    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:52.171951    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:52.171958    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:52.183639    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:52.183649    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:52.198248    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:52.198258    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:52.209642    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:52.209653    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:52.245300    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:52.245313    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:52.259758    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:52.259772    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:52.271333    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:52.271345    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:52.294593    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:52.294601    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:52.315589    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:52.315604    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:52.327019    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:52.327031    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:52.338644    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:52.338655    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
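The "container status" command in the last line above is worth unpacking: `which crictl || echo crictl` substitutes crictl's full path when it is installed (and the bare name otherwise), and the outer `|| sudo docker ps -a` falls back to Docker when the crictl invocation fails. A hedged Go equivalent of that fallback, for illustration only:

```go
// Hedged reconstruction of the "container status" fallback seen in the log:
// prefer crictl when present, otherwise fall back to docker ps.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backticks here are shell command substitution, not Go syntax: they
	// resolve crictl's path if installed; the trailing || handles the case
	// where crictl is absent or errors out.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listing failed:", err)
	}
	fmt.Print(string(out))
}
```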
	I0831 15:59:54.852340    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:59.854422    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:59.854598    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:59.875751    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:59.875837    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:59.887848    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:59.887924    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:59.899788    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:59.899855    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:59.910262    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:59.910336    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:59.921447    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:59.921514    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:59.932074    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:59.932140    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:59.942318    4326 logs.go:276] 0 containers: []
	W0831 15:59:59.942329    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:59.942383    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:59.952481    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:59.952497    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:59.952503    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:59.957647    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:59.957655    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:59.991779    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:59.991790    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:00.009559    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:00.009570    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:00.021454    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:00.021466    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:00.053908    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:00.053919    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:00.065777    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:00.065790    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:00.080178    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:00.080190    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:00.091393    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:00.091406    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:00.102962    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:00.102974    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:00.114417    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:00.114427    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:00.128359    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:00.128369    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:00.140208    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:00.140221    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:00.155107    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:00.155116    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:00.167137    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:00.167148    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:02.691460    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:07.693252    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:07.693366    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:07.705124    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 16:00:07.705202    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:07.719146    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 16:00:07.719214    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:07.730543    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 16:00:07.730613    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:07.740730    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 16:00:07.740804    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:07.751577    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 16:00:07.751645    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:07.762084    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 16:00:07.762156    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:07.772382    4326 logs.go:276] 0 containers: []
	W0831 16:00:07.772393    4326 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:07.772453    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:07.786559    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 16:00:07.786577    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:07.786583    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:07.822356    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:07.822367    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:07.859324    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:07.859343    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:07.877759    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:07.877770    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:07.892994    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:07.893005    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:07.904694    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:07.904709    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:07.939592    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:07.939603    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:07.953232    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:07.953242    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:07.965058    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:07.965067    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:07.989995    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:07.990005    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:08.008163    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:08.008173    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:08.019672    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:08.019681    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:08.031254    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 16:00:08.031265    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:08.048446    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:08.048459    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:08.053420    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:08.053427    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:10.565750    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:15.568172    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:15.568547    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:15.596058    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 16:00:15.596195    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:15.617376    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 16:00:15.617456    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:15.632030    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 16:00:15.632108    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:15.642934    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 16:00:15.642991    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:15.652965    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 16:00:15.653021    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:15.663681    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 16:00:15.663754    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:15.675557    4326 logs.go:276] 0 containers: []
	W0831 16:00:15.675568    4326 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:15.675631    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:15.686134    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 16:00:15.686151    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:15.686157    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:15.701299    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:15.701310    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:15.713316    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:15.713326    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:15.727642    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:15.727663    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:15.733013    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:15.733022    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:15.744695    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:15.744706    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:15.756758    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:15.756768    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:15.775841    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:15.775852    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:15.799062    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:15.799070    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:15.830718    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:15.830727    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:15.844669    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:15.844681    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:15.856369    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:15.856380    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:15.870649    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:15.870661    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:15.882483    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:15.882497    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:15.918112    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 16:00:15.918123    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:18.438178    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:23.440200    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:23.440303    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:23.456239    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 16:00:23.456314    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:23.467574    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 16:00:23.467648    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:23.479236    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 16:00:23.479320    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:23.491505    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 16:00:23.491577    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:23.502430    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 16:00:23.502500    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:23.518349    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 16:00:23.518418    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:23.530486    4326 logs.go:276] 0 containers: []
	W0831 16:00:23.530500    4326 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:23.530561    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:23.541750    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 16:00:23.541771    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:23.541778    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:23.546530    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:23.546542    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:23.564371    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:23.564388    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:23.577806    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:23.577817    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:23.589898    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:23.589912    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:23.607011    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:23.607023    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:23.621659    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:23.621672    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:23.634898    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:23.634909    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:23.646669    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 16:00:23.646679    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:23.665302    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:23.665313    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:23.689144    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:23.689159    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:23.702978    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:23.702988    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:23.740596    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:23.740606    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:23.753803    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:23.753815    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:23.766968    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:23.766982    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:26.304167    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:31.306676    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:31.311276    4326 out.go:201] 
	W0831 16:00:31.314058    4326 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0831 16:00:31.314067    4326 out.go:270] * 
	W0831 16:00:31.314784    4326 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:00:31.330059    4326 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-137000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
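The failing sequence can be replayed by hand with the same two binaries the test drives. A minimal sketch, assuming the v1.26.0 release binary has already been downloaded as ./minikube-v1.26.0 (an illustrative path; the test fetches it to a temp dir) and using the profile name from the audit table below:

	# start a cluster with the old release, as TestRunningBinaryUpgrade does
	./minikube-v1.26.0 start -p running-upgrade-137000 --memory=2200 --vm-driver=qemu2
	# then upgrade it in place with the freshly built binary; exit status 80 (GUEST_START) reproduces this failure
	out/minikube-darwin-arm64 start -p running-upgrade-137000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2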
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-31 16:00:31.419211 -0700 PDT m=+3338.395725876
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-137000 -n running-upgrade-137000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-137000 -n running-upgrade-137000: exit status 2 (15.660668542s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
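The check above samples only the {{.Host}} field, which is why "Running" in stdout and a non-zero exit can coexist: minikube status encodes unhealthy components in its exit code while still printing per-component state. For interactive triage, a fuller picture comes from the JSON output; a minimal sketch, assuming the same profile is still up:

	out/minikube-darwin-arm64 status -p running-upgrade-137000 --output json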
helpers_test.go:245: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-137000 logs -n 25
helpers_test.go:253: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-751000          | force-systemd-flag-751000 | jenkins | v1.33.1 | 31 Aug 24 15:50 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-325000              | force-systemd-env-325000  | jenkins | v1.33.1 | 31 Aug 24 15:50 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-325000           | force-systemd-env-325000  | jenkins | v1.33.1 | 31 Aug 24 15:50 PDT | 31 Aug 24 15:50 PDT |
	| start   | -p docker-flags-502000                | docker-flags-502000       | jenkins | v1.33.1 | 31 Aug 24 15:50 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-751000             | force-systemd-flag-751000 | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-751000          | force-systemd-flag-751000 | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT | 31 Aug 24 15:51 PDT |
	| start   | -p cert-expiration-889000             | cert-expiration-889000    | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-502000 ssh               | docker-flags-502000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-502000 ssh               | docker-flags-502000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-502000                | docker-flags-502000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT | 31 Aug 24 15:51 PDT |
	| start   | -p cert-options-731000                | cert-options-731000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-731000 ssh               | cert-options-731000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-731000 -- sudo        | cert-options-731000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-731000                | cert-options-731000       | jenkins | v1.33.1 | 31 Aug 24 15:51 PDT | 31 Aug 24 15:51 PDT |
	| start   | -p running-upgrade-137000             | minikube                  | jenkins | v1.26.0 | 31 Aug 24 15:51 PDT | 31 Aug 24 15:52 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-137000             | running-upgrade-137000    | jenkins | v1.33.1 | 31 Aug 24 15:52 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-889000             | cert-expiration-889000    | jenkins | v1.33.1 | 31 Aug 24 15:54 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-889000             | cert-expiration-889000    | jenkins | v1.33.1 | 31 Aug 24 15:54 PDT | 31 Aug 24 15:54 PDT |
	| start   | -p kubernetes-upgrade-584000          | kubernetes-upgrade-584000 | jenkins | v1.33.1 | 31 Aug 24 15:54 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-584000          | kubernetes-upgrade-584000 | jenkins | v1.33.1 | 31 Aug 24 15:54 PDT | 31 Aug 24 15:54 PDT |
	| start   | -p kubernetes-upgrade-584000          | kubernetes-upgrade-584000 | jenkins | v1.33.1 | 31 Aug 24 15:54 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-584000          | kubernetes-upgrade-584000 | jenkins | v1.33.1 | 31 Aug 24 15:54 PDT | 31 Aug 24 15:54 PDT |
	| start   | -p stopped-upgrade-231000             | minikube                  | jenkins | v1.26.0 | 31 Aug 24 15:54 PDT | 31 Aug 24 15:55 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-231000 stop           | minikube                  | jenkins | v1.26.0 | 31 Aug 24 15:55 PDT | 31 Aug 24 15:55 PDT |
	| start   | -p stopped-upgrade-231000             | stopped-upgrade-231000    | jenkins | v1.33.1 | 31 Aug 24 15:55 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 15:55:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 15:55:44.556037    4510 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:55:44.556186    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:55:44.556192    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 15:55:44.556195    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:55:44.556346    4510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:55:44.557645    4510 out.go:352] Setting JSON to false
	I0831 15:55:44.577313    4510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3308,"bootTime":1725141636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:55:44.577384    4510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:55:44.582566    4510 out.go:177] * [stopped-upgrade-231000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:55:44.589458    4510 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:55:44.589502    4510 notify.go:220] Checking for updates...
	I0831 15:55:44.597483    4510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:55:44.600448    4510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:55:44.604506    4510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:55:44.607451    4510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:55:44.610603    4510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:55:44.613709    4510 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:55:44.617416    4510 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0831 15:55:44.620498    4510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:55:44.624418    4510 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:55:44.631465    4510 start.go:297] selected driver: qemu2
	I0831 15:55:44.631472    4510 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:55:44.631531    4510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:55:44.634085    4510 cni.go:84] Creating CNI manager for ""
	I0831 15:55:44.634102    4510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:55:44.634129    4510 start.go:340] cluster config:
	{Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:55:44.634183    4510 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:55:44.641411    4510 out.go:177] * Starting "stopped-upgrade-231000" primary control-plane node in "stopped-upgrade-231000" cluster
	I0831 15:55:44.645442    4510 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0831 15:55:44.645467    4510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0831 15:55:44.645477    4510 cache.go:56] Caching tarball of preloaded images
	I0831 15:55:44.645538    4510 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:55:44.645543    4510 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0831 15:55:44.645596    4510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/config.json ...
	I0831 15:55:44.646116    4510 start.go:360] acquireMachinesLock for stopped-upgrade-231000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:55:44.646153    4510 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "stopped-upgrade-231000"
	I0831 15:55:44.646161    4510 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:55:44.646165    4510 fix.go:54] fixHost starting: 
	I0831 15:55:44.646274    4510 fix.go:112] recreateIfNeeded on stopped-upgrade-231000: state=Stopped err=<nil>
	W0831 15:55:44.646282    4510 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:55:44.653468    4510 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-231000" ...
	I0831 15:55:43.664519    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:43.664533    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:43.700842    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:43.700855    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:43.726322    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:43.726336    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:43.738150    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:43.738163    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:43.749699    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:43.749709    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:46.263432    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:44.656441    4510 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:55:44.656519    4510 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50489-:22,hostfwd=tcp::50490-:2376,hostname=stopped-upgrade-231000 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/disk.qcow2
	I0831 15:55:44.701578    4510 main.go:141] libmachine: STDOUT: 
	I0831 15:55:44.701606    4510 main.go:141] libmachine: STDERR: 
	I0831 15:55:44.701611    4510 main.go:141] libmachine: Waiting for VM to start (ssh -p 50489 docker@127.0.0.1)...
	I0831 15:55:51.265784    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:51.265892    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:51.277133    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:51.277217    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:51.287851    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:51.287929    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:51.298339    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:51.298412    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:51.309176    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:51.309239    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:51.319592    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:51.319651    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:51.332131    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:51.332200    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:51.342768    4326 logs.go:276] 0 containers: []
	W0831 15:55:51.342779    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:51.342833    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:51.353464    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:51.353522    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:51.353537    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:51.391118    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:51.391133    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:51.395625    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:51.395631    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:51.431303    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:51.431316    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:51.445494    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:51.445506    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:51.457503    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:51.457514    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:51.470174    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:51.470186    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:51.481555    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:51.481566    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:51.505146    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:51.505157    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:51.520027    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:51.520040    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:55:51.534582    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:51.534592    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:51.546644    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:51.546657    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:51.558486    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:51.558497    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:51.576952    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:51.576966    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:51.604529    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:51.604540    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:51.621395    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:51.621406    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:51.633214    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:51.633225    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:54.147234    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:55:59.149412    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:55:59.149532    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:55:59.160537    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:55:59.160606    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:55:59.171137    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:55:59.171211    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:55:59.182291    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:55:59.182357    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:55:59.192487    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:55:59.192549    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:55:59.202869    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:55:59.202928    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:55:59.212987    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:55:59.213053    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:55:59.223896    4326 logs.go:276] 0 containers: []
	W0831 15:55:59.223907    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:55:59.223961    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:55:59.237574    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:55:59.237589    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:55:59.237595    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:55:59.250867    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:55:59.250880    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:55:59.262129    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:55:59.262142    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:55:59.298577    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:55:59.298590    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:55:59.314191    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:55:59.314204    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:55:59.325765    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:55:59.325775    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:55:59.348076    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:55:59.348083    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:55:59.382990    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:55:59.383001    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:55:59.393518    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:55:59.393531    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:55:59.416762    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:55:59.416771    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:55:59.434739    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:55:59.434749    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:55:59.446132    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:55:59.446143    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:55:59.450506    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:55:59.450513    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:55:59.464121    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:55:59.464131    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:55:59.475485    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:55:59.475496    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:55:59.487557    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:55:59.487570    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:55:59.501476    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:55:59.501489    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:56:02.020045    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:04.612310    4510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/config.json ...
	I0831 15:56:04.612998    4510 machine.go:93] provisionDockerMachine start ...
	I0831 15:56:04.613156    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:04.613636    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:04.613650    4510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 15:56:04.703744    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0831 15:56:04.703774    4510 buildroot.go:166] provisioning hostname "stopped-upgrade-231000"
	I0831 15:56:04.703911    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:04.704142    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:04.704166    4510 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-231000 && echo "stopped-upgrade-231000" | sudo tee /etc/hostname
	I0831 15:56:04.788902    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-231000
	
	I0831 15:56:04.789000    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:04.789202    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:04.789214    4510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-231000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-231000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-231000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 15:56:04.865209    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:56:04.865225    4510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18943-972/.minikube CaCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18943-972/.minikube}
	I0831 15:56:04.865235    4510 buildroot.go:174] setting up certificates
	I0831 15:56:04.865240    4510 provision.go:84] configureAuth start
	I0831 15:56:04.865245    4510 provision.go:143] copyHostCerts
	I0831 15:56:04.865357    4510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem, removing ...
	I0831 15:56:04.865369    4510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem
	I0831 15:56:04.865523    4510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem (1082 bytes)
	I0831 15:56:04.865776    4510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem, removing ...
	I0831 15:56:04.865781    4510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem
	I0831 15:56:04.865913    4510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem (1123 bytes)
	I0831 15:56:04.866068    4510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem, removing ...
	I0831 15:56:04.866072    4510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem
	I0831 15:56:04.866138    4510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem (1679 bytes)
	I0831 15:56:04.866247    4510 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-231000 san=[127.0.0.1 localhost minikube stopped-upgrade-231000]
	I0831 15:56:05.018380    4510 provision.go:177] copyRemoteCerts
	I0831 15:56:05.018430    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 15:56:05.018439    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 15:56:05.055333    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 15:56:05.062660    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 15:56:05.069371    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0831 15:56:05.075910    4510 provision.go:87] duration metric: took 210.668625ms to configureAuth
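configureAuth above regenerates server.pem with the SANs listed in the log (127.0.0.1, localhost, minikube, stopped-upgrade-231000). As a rough stand-in, a self-signed certificate with the same SAN set can be produced from Go's standard library; minikube actually signs with its ca-key.pem, so treat this purely as a sketch:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in for the CA-signed server.pem; SANs mirror the
	// log: IP 127.0.0.1 plus DNS names localhost, minikube and the machine
	// name. The validity matches the profile's CertExpiration (26280h).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-231000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-231000"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}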
	I0831 15:56:05.075921    4510 buildroot.go:189] setting minikube options for container-runtime
	I0831 15:56:05.076036    4510 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:56:05.076073    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.076193    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.076200    4510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 15:56:05.142893    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0831 15:56:05.142904    4510 buildroot.go:70] root file system type: tmpfs
	I0831 15:56:05.142964    4510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 15:56:05.143018    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.143134    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.143169    4510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 15:56:05.213813    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0831 15:56:05.213864    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.213982    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.213990    4510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 15:56:05.546964    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0831 15:56:05.546977    4510 machine.go:96] duration metric: took 933.996917ms to provisionDockerMachine
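The docker.service update above follows a write-then-swap idiom: render the unit to docker.service.new, diff it against the installed unit, and only on a difference move the new file into place and daemon-reload/enable/restart. Here diff fails because no unit exists yet, so the mv branch runs. The local shape of that idempotent update, as a sketch (restarting docker via systemctl is left to the caller):

package main

import (
	"bytes"
	"os"
)

// replaceIfChanged mirrors the diff-or-mv idiom: write the candidate unit
// next to the real one and swap only when contents differ, so
// daemon-reload/restart are skipped on no-op updates.
func replaceIfChanged(path string, contents []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, contents) {
		return false, nil // identical: nothing to do
	}
	if err := os.WriteFile(path+".new", contents, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	if _, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n")); err != nil {
		panic(err)
	}
}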
	I0831 15:56:05.546985    4510 start.go:293] postStartSetup for "stopped-upgrade-231000" (driver="qemu2")
	I0831 15:56:05.546992    4510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 15:56:05.547046    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 15:56:05.547055    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 15:56:05.583737    4510 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 15:56:05.584998    4510 info.go:137] Remote host: Buildroot 2021.02.12
	I0831 15:56:05.585005    4510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/addons for local assets ...
	I0831 15:56:05.585096    4510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/files for local assets ...
	I0831 15:56:05.585217    4510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem -> 14322.pem in /etc/ssl/certs
	I0831 15:56:05.585343    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 15:56:05.587914    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:56:05.595189    4510 start.go:296] duration metric: took 48.199417ms for postStartSetup
	I0831 15:56:05.595212    4510 fix.go:56] duration metric: took 20.949661041s for fixHost
	I0831 15:56:05.595258    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.595377    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.595383    4510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 15:56:05.664268    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725144965.190477254
	
	I0831 15:56:05.664280    4510 fix.go:216] guest clock: 1725144965.190477254
	I0831 15:56:05.664284    4510 fix.go:229] Guest: 2024-08-31 15:56:05.190477254 -0700 PDT Remote: 2024-08-31 15:56:05.595214 -0700 PDT m=+21.069833043 (delta=-404.736746ms)
	I0831 15:56:05.664301    4510 fix.go:200] guest clock delta is within tolerance: -404.736746ms
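fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the drift if it is within tolerance (here -404ms). A sketch of the parse-and-compare; the 2s tolerance is our assumption, not the logged value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (as in the log) into a
// time.Time so the host can compute the guest/host delta.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1725144965.190477254")
	delta := time.Until(guest)
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() < 2*time.Second)
}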
	I0831 15:56:05.664304    4510 start.go:83] releasing machines lock for "stopped-upgrade-231000", held for 21.018762833s
	I0831 15:56:05.664388    4510 ssh_runner.go:195] Run: cat /version.json
	I0831 15:56:05.664400    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 15:56:05.664389    4510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 15:56:05.664439    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	W0831 15:56:05.665233    4510 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50489: connect: connection refused
	I0831 15:56:05.665258    4510 retry.go:31] will retry after 307.677355ms: dial tcp [::1]:50489: connect: connection refused
	W0831 15:56:05.700320    4510 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0831 15:56:05.700378    4510 ssh_runner.go:195] Run: systemctl --version
	I0831 15:56:05.702203    4510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 15:56:05.703819    4510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 15:56:05.703840    4510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0831 15:56:05.707157    4510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0831 15:56:05.711759    4510 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
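The two find/sed invocations above pin every bridge and podman CNI config under /etc/cni/net.d to the pod CIDR 10.244.0.0/16. An in-process equivalent of the subnet rewrite, as a sketch (minikube really does this with sed over SSH, and the configs are JSON):

package main

import (
	"fmt"
	"regexp"
)

var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

// pinSubnet rewrites any "subnet" value in a CNI config to the cluster CIDR.
func pinSubnet(conf, cidr string) string {
	return subnetRe.ReplaceAllString(conf, fmt.Sprintf(`"subnet": %q`, cidr))
}

func main() {
	in := `{"type": "bridge", "ipam": {"subnet": "10.88.0.0/16"}}`
	fmt.Println(pinSubnet(in, "10.244.0.0/16"))
}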
	I0831 15:56:05.711767    4510 start.go:495] detecting cgroup driver to use...
	I0831 15:56:05.711846    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:56:05.718722    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0831 15:56:05.721575    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 15:56:05.724232    4510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 15:56:05.724255    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 15:56:05.727457    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:56:05.730523    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 15:56:05.733174    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:56:05.736116    4510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 15:56:05.739678    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 15:56:05.742899    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 15:56:05.745813    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 15:56:05.748640    4510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 15:56:05.751652    4510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 15:56:05.754467    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:05.812683    4510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 15:56:05.819557    4510 start.go:495] detecting cgroup driver to use...
	I0831 15:56:05.819628    4510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 15:56:05.824872    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:56:05.829790    4510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 15:56:05.836920    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:56:05.841214    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:56:05.846012    4510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0831 15:56:05.903564    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:56:05.909530    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:56:05.915275    4510 ssh_runner.go:195] Run: which cri-dockerd
	I0831 15:56:05.916679    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 15:56:05.919367    4510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0831 15:56:05.924345    4510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 15:56:05.985249    4510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 15:56:06.045953    4510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 15:56:06.046013    4510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 15:56:06.051232    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:06.113402    4510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:56:07.243159    4510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1297735s)
	I0831 15:56:07.243216    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 15:56:07.248028    4510 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0831 15:56:07.254914    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:56:07.260918    4510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 15:56:07.323474    4510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 15:56:07.387804    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:07.453076    4510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 15:56:07.459310    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:56:07.464157    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:07.525664    4510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 15:56:07.565281    4510 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 15:56:07.565373    4510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 15:56:07.567544    4510 start.go:563] Will wait 60s for crictl version
	I0831 15:56:07.567598    4510 ssh_runner.go:195] Run: which crictl
	I0831 15:56:07.568889    4510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 15:56:07.583334    4510 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
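Both 60-second waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are simple polls over SSH. A local sketch of the socket wait:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket is the shape of the "Will wait 60s for socket path" step:
// poll with stat until the CRI socket appears or the deadline passes.
// (Sketch; minikube runs stat over SSH, and the 500ms interval is ours.)
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}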
	I0831 15:56:07.583399    4510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:56:07.601707    4510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:56:07.022685    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:07.023008    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:56:07.060388    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:56:07.060517    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:56:07.078299    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:56:07.078388    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:56:07.092029    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:56:07.092099    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:56:07.104130    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:56:07.104193    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:56:07.114944    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:56:07.114999    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:56:07.125999    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:56:07.126074    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:56:07.137016    4326 logs.go:276] 0 containers: []
	W0831 15:56:07.137029    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:56:07.137090    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:56:07.150733    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:56:07.150752    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:56:07.150758    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:56:07.169568    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:56:07.169585    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:56:07.185158    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:56:07.185169    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:56:07.201582    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:56:07.201596    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:56:07.214522    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:56:07.214535    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:56:07.233118    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:56:07.233140    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
	I0831 15:56:07.246468    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:56:07.246480    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:56:07.270245    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:56:07.270255    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:56:07.305578    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:56:07.305598    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:56:07.318239    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:56:07.318250    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:56:07.338267    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:56:07.338281    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:56:07.352421    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:56:07.352435    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:56:07.389364    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:56:07.389372    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:56:07.401179    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:56:07.401192    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:56:07.405700    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:56:07.405706    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:56:07.417678    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:56:07.417690    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:56:07.429534    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:56:07.429544    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
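Meanwhile pid 4326 (a second test profile running concurrently) is in its diagnostics loop: list container IDs per control-plane component with a docker name filter, then tail 400 lines of each container's logs. Roughly, as a sketch without the SSH hop:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs mirrors the loop above: list container IDs for a component via
// `docker ps -a --filter`, then tail each one's logs.
func gatherLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		if err := gatherLogs(c); err != nil {
			fmt.Println(err)
		}
	}
}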
	I0831 15:56:07.622316    4510 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0831 15:56:07.622383    4510 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0831 15:56:07.623709    4510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 15:56:07.627849    4510 kubeadm.go:883] updating cluster {Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0831 15:56:07.627899    4510 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0831 15:56:07.627938    4510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:56:07.638212    4510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:56:07.638221    4510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0831 15:56:07.638270    4510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:56:07.641331    4510 ssh_runner.go:195] Run: which lz4
	I0831 15:56:07.642659    4510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 15:56:07.643948    4510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 15:56:07.643957    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0831 15:56:08.552343    4510 docker.go:649] duration metric: took 909.747875ms to copy over tarball
	I0831 15:56:08.552399    4510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 15:56:09.951417    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:09.720334    4510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.167956041s)
	I0831 15:56:09.720351    4510 ssh_runner.go:146] rm: /preloaded.tar.lz4
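The preload path above is: stat the target (fails), scp the ~359MB tarball into the guest, untar it with lz4 while preserving security xattrs, then delete it. Locally the extraction step looks like this (sketch; the real flow copies over SSH first):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload only extracts when the tarball is present, then removes it,
// matching the stat / tar / rm sequence in the log.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("existence check failed: %w", err)
	}
	// Matches the logged command: preserve security xattrs, decompress lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}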
	I0831 15:56:09.735976    4510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:56:09.739315    4510 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0831 15:56:09.744621    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:09.804420    4510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:56:11.066491    4510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2620915s)
	I0831 15:56:11.066573    4510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:56:11.078769    4510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:56:11.078779    4510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0831 15:56:11.078785    4510 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0831 15:56:11.084458    4510 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.086184    4510 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.087948    4510 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.088533    4510 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.089543    4510 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.089931    4510 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.090603    4510 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.092402    4510 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.092789    4510 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.092855    4510 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0831 15:56:11.094082    4510 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.094081    4510 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.094967    4510 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0831 15:56:11.095002    4510 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.095877    4510 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.096416    4510 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.510405    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.522609    4510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0831 15:56:11.522633    4510 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.522686    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.532785    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0831 15:56:11.544365    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.545896    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0831 15:56:11.549447    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.551852    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.561526    4510 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0831 15:56:11.561555    4510 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.561611    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.569542    4510 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0831 15:56:11.569577    4510 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0831 15:56:11.569631    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0831 15:56:11.577559    4510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0831 15:56:11.577577    4510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0831 15:56:11.577583    4510 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.577586    4510 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.577632    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.577632    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.577664    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0831 15:56:11.593877    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0831 15:56:11.594000    4510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0831 15:56:11.594957    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0831 15:56:11.594976    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0831 15:56:11.596512    4510 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0831 15:56:11.596525    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0831 15:56:11.603782    4510 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0831 15:56:11.603791    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0831 15:56:11.610093    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0831 15:56:11.621420    4510 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0831 15:56:11.621548    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.643213    4510 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0831 15:56:11.643296    4510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0831 15:56:11.643317    4510 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.643372    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.649672    4510 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0831 15:56:11.649695    4510 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.649730    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.661941    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0831 15:56:11.661947    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0831 15:56:11.662067    4510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0831 15:56:11.663490    4510 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0831 15:56:11.663503    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0831 15:56:11.705030    4510 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0831 15:56:11.705043    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0831 15:56:11.739935    4510 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0831 15:56:11.897343    4510 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0831 15:56:11.897550    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.917187    4510 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0831 15:56:11.917227    4510 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.917322    4510 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.937863    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0831 15:56:11.938002    4510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0831 15:56:11.939673    4510 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0831 15:56:11.939685    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0831 15:56:11.969450    4510 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0831 15:56:11.969466    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0831 15:56:12.205840    4510 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0831 15:56:12.205878    4510 cache_images.go:92] duration metric: took 1.12711875s to LoadCachedImages
	W0831 15:56:12.205919    4510 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
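Each cached image above follows the same pattern: `docker image inspect` to check whether the image exists at the expected hash, `docker rmi` plus an scp of the cached tarball when it does not, then `sudo cat <tar> | docker load`. A local sketch of the load step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage mirrors `sudo cat <tar> | docker load`: stream a cached image
// tarball into the docker daemon. (Sketch; the real runner pipes over SSH.)
func loadImage(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}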
	I0831 15:56:12.205928    4510 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0831 15:56:12.205982    4510 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-231000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
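The kubelet drop-in above is rendered from the profile config (binary version, node name, node IP). A toy re-rendering with text/template; the template fields and flag subset are our invention for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.24.1", "Node": "stopped-upgrade-231000", "IP": "10.0.2.15",
	})
}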
	I0831 15:56:12.206041    4510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 15:56:12.219524    4510 cni.go:84] Creating CNI manager for ""
	I0831 15:56:12.219536    4510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:56:12.219540    4510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 15:56:12.219549    4510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-231000 NodeName:stopped-upgrade-231000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 15:56:12.219619    4510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-231000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 15:56:12.219666    4510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0831 15:56:12.223050    4510 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 15:56:12.223080    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 15:56:12.225739    4510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0831 15:56:12.230602    4510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 15:56:12.235536    4510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0831 15:56:12.240931    4510 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0831 15:56:12.242152    4510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
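This /etc/hosts edit (same pattern as host.minikube.internal earlier) is idempotent: strip any existing line for the name, append a fresh tab-separated entry, and copy the result back into place. In Go terms (sketch; minikube does it with the bash one-liner shown):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep -v + echo + cp pipeline: drop any stale
// line ending in the name, append the fresh "IP<TAB>name" pair, rewrite.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}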
	I0831 15:56:12.245623    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:12.310130    4510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:56:12.319798    4510 certs.go:68] Setting up /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000 for IP: 10.0.2.15
	I0831 15:56:12.319808    4510 certs.go:194] generating shared ca certs ...
	I0831 15:56:12.319817    4510 certs.go:226] acquiring lock for ca certs: {Name:mk9afb7f1a72cfb2bc15164a899be0d76a69591d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.319982    4510 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key
	I0831 15:56:12.320030    4510 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key
	I0831 15:56:12.320038    4510 certs.go:256] generating profile certs ...
	I0831 15:56:12.320096    4510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key
	I0831 15:56:12.320115    4510 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec
	I0831 15:56:12.320129    4510 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0831 15:56:12.412516    4510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec ...
	I0831 15:56:12.412529    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec: {Name:mkd4f562fa4ad0db149e90b6cef4c0661f21a8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.413896    4510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec ...
	I0831 15:56:12.413902    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec: {Name:mk6d7b1a754e8e097f3c4a5eb00b90c52f3490a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.414043    4510 certs.go:381] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt
	I0831 15:56:12.414176    4510 certs.go:385] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key
	I0831 15:56:12.414340    4510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/proxy-client.key
	I0831 15:56:12.414481    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem (1338 bytes)
	W0831 15:56:12.414508    4510 certs.go:480] ignoring /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432_empty.pem, impossibly tiny 0 bytes
	I0831 15:56:12.414513    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 15:56:12.414539    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem (1082 bytes)
	I0831 15:56:12.414558    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem (1123 bytes)
	I0831 15:56:12.414575    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem (1679 bytes)
	I0831 15:56:12.414618    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:56:12.414937    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 15:56:12.421998    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 15:56:12.428932    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 15:56:12.435961    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0831 15:56:12.443419    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0831 15:56:12.450771    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 15:56:12.457667    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 15:56:12.464423    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 15:56:12.471470    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem --> /usr/share/ca-certificates/1432.pem (1338 bytes)
	I0831 15:56:12.478782    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /usr/share/ca-certificates/14322.pem (1708 bytes)
	I0831 15:56:12.485587    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 15:56:12.492428    4510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 15:56:12.497522    4510 ssh_runner.go:195] Run: openssl version
	I0831 15:56:12.499334    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1432.pem && ln -fs /usr/share/ca-certificates/1432.pem /etc/ssl/certs/1432.pem"
	I0831 15:56:12.502700    4510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1432.pem
	I0831 15:56:12.504130    4510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:20 /usr/share/ca-certificates/1432.pem
	I0831 15:56:12.504149    4510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1432.pem
	I0831 15:56:12.506004    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1432.pem /etc/ssl/certs/51391683.0"
	I0831 15:56:12.508863    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14322.pem && ln -fs /usr/share/ca-certificates/14322.pem /etc/ssl/certs/14322.pem"
	I0831 15:56:12.512226    4510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14322.pem
	I0831 15:56:12.513702    4510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:20 /usr/share/ca-certificates/14322.pem
	I0831 15:56:12.513724    4510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14322.pem
	I0831 15:56:12.515466    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14322.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 15:56:12.519114    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 15:56:12.522249    4510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:56:12.523721    4510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:05 /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:56:12.523743    4510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:56:12.525531    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 15:56:12.528401    4510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 15:56:12.529957    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 15:56:12.531773    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 15:56:12.534590    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 15:56:12.536692    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 15:56:12.538610    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 15:56:12.540328    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
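The six openssl x509 -checkend 86400 runs above verify that each control-plane certificate stays valid for at least the next 24 hours (exit 0 means "will not expire within N seconds"). A minimal Go sketch of the same check, using crypto/x509; the certificate path is illustrative, not taken from the log:

// certcheck.go - a minimal sketch of what `openssl x509 -checkend 86400`
// verifies: that a PEM certificate does not expire within the next 24 hours.
// The path in main is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// openssl's -checkend N asks: will the cert already be expired N seconds from now?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}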
	I0831 15:56:12.542350    4510 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:56:12.542413    4510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:56:12.552609    4510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 15:56:12.556946    4510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 15:56:12.556952    4510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 15:56:12.556978    4510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 15:56:12.560243    4510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:56:12.560542    4510 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-231000" does not appear in /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:56:12.560639    4510 kubeconfig.go:62] /Users/jenkins/minikube-integration/18943-972/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-231000" cluster setting kubeconfig missing "stopped-upgrade-231000" context setting]
	I0831 15:56:12.560822    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.561271    4510 kapi.go:59] client config for stopped-upgrade-231000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106638030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 15:56:12.561587    4510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 15:56:12.564257    4510 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-231000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
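The drift check at 15:56:12.561 runs diff -u over the deployed and freshly generated kubeadm.yaml and treats exit status 1 as "config drifted, reconfigure". A sketch of that exit-code convention in Go, run locally rather than through minikube's ssh_runner; paths are the ones shown in the log:

// driftcheck.go - sketch of detecting kubeadm config drift as the log shows:
// run `diff -u old new`; exit 0 means identical, exit 1 means the files
// differ (the unified diff is on stdout), exit >1 means diff itself failed.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: drift; out holds the unified diff
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}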
	I0831 15:56:12.564263    4510 kubeadm.go:1160] stopping kube-system containers ...
	I0831 15:56:12.564301    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:56:12.576768    4510 docker.go:483] Stopping containers: [bd81755e0db4 8c8721bd1a75 c5f4cc8bd872 f4db5f2a3437 4873f60be323 63e4154d4309 effb4c0615cc b4a24276698b]
	I0831 15:56:12.576832    4510 ssh_runner.go:195] Run: docker stop bd81755e0db4 8c8721bd1a75 c5f4cc8bd872 f4db5f2a3437 4873f60be323 63e4154d4309 effb4c0615cc b4a24276698b
	I0831 15:56:12.587357    4510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0831 15:56:12.593269    4510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:56:12.596307    4510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 15:56:12.596315    4510 kubeadm.go:157] found existing configuration files:
	
	I0831 15:56:12.596353    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf
	I0831 15:56:12.599491    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 15:56:12.599523    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 15:56:12.602795    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf
	I0831 15:56:12.605327    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 15:56:12.605358    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 15:56:12.607970    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf
	I0831 15:56:12.610649    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 15:56:12.610671    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:56:12.613239    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf
	I0831 15:56:12.615844    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 15:56:12.615867    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
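The loop from 15:56:12.596 to 15:56:12.615 greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it, so kubeadm can regenerate them (here every grep exits 2 because the files are simply absent). A sketch of the same cleanup, assuming local execution instead of ssh_runner:

// stalecleanup.go - sketch of the grep-then-remove loop in the log: any
// kubeconfig under /etc/kubernetes that does not mention the expected
// control-plane endpoint is deleted so `kubeadm init phase kubeconfig`
// rewrites it. A missing file is treated the same as a wrong endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50524"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // absent or stale: let kubeadm regenerate it
			fmt.Printf("removed (or already absent): %s\n", f)
		}
	}
}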
	I0831 15:56:12.619029    4510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:56:12.622196    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:12.646401    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.034063    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.145112    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.171200    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.191995    4510 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:56:13.192074    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:13.694127    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:14.194093    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:14.198295    4510 api_server.go:72] duration metric: took 1.006330625s to wait for apiserver process to appear ...
	I0831 15:56:14.198304    4510 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:56:14.198313    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
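The "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs that recur below are a polling loop: an HTTPS GET against the apiserver's /healthz with a short per-request deadline, retried until it answers or an overall timeout elapses. A sketch of that loop; TLS verification is skipped here for brevity, whereas minikube validates against the cluster CA:

// healthz.go - sketch of the recurring healthz probe in this log: GET
// https://<node>:8443/healthz with a 5s per-request deadline, retried every
// 500ms until the endpoint returns HTTP 200 "ok" or the overall timeout hits.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		resp, err := client.Do(req)
		cancel()
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}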
	I0831 15:56:14.953471    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:14.953572    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:56:14.964677    4326 logs.go:276] 2 containers: [34010c9065bf fa3500a174d2]
	I0831 15:56:14.964751    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:56:14.976071    4326 logs.go:276] 2 containers: [40a1ee10a7d0 e2ac9d4d3baa]
	I0831 15:56:14.976150    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:56:14.987326    4326 logs.go:276] 1 containers: [e08e2dd19f17]
	I0831 15:56:14.987401    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:56:14.998536    4326 logs.go:276] 2 containers: [9e80fa74d2cd 383c3cd78d19]
	I0831 15:56:14.998606    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:56:15.012354    4326 logs.go:276] 1 containers: [845611e781c4]
	I0831 15:56:15.012422    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:56:15.023422    4326 logs.go:276] 2 containers: [34b48c43d533 803a82350bfc]
	I0831 15:56:15.023490    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:56:15.033866    4326 logs.go:276] 0 containers: []
	W0831 15:56:15.033876    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:56:15.033933    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:56:15.045094    4326 logs.go:276] 2 containers: [d36b32cb40dc cc1187c71243]
	I0831 15:56:15.045111    4326 logs.go:123] Gathering logs for etcd [e2ac9d4d3baa] ...
	I0831 15:56:15.045117    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ac9d4d3baa"
	I0831 15:56:15.063159    4326 logs.go:123] Gathering logs for storage-provisioner [d36b32cb40dc] ...
	I0831 15:56:15.063169    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36b32cb40dc"
	I0831 15:56:15.075255    4326 logs.go:123] Gathering logs for storage-provisioner [cc1187c71243] ...
	I0831 15:56:15.075265    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc1187c71243"
	I0831 15:56:15.090873    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:56:15.090884    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:56:15.103024    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:56:15.103038    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:56:15.125228    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:56:15.125238    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:56:15.129308    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:56:15.129314    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:56:15.166275    4326 logs.go:123] Gathering logs for kube-apiserver [fa3500a174d2] ...
	I0831 15:56:15.166291    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3500a174d2"
	I0831 15:56:15.196477    4326 logs.go:123] Gathering logs for coredns [e08e2dd19f17] ...
	I0831 15:56:15.196487    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08e2dd19f17"
	I0831 15:56:15.207770    4326 logs.go:123] Gathering logs for kube-scheduler [9e80fa74d2cd] ...
	I0831 15:56:15.207782    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e80fa74d2cd"
	I0831 15:56:15.223786    4326 logs.go:123] Gathering logs for kube-controller-manager [34b48c43d533] ...
	I0831 15:56:15.223796    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b48c43d533"
	I0831 15:56:15.241297    4326 logs.go:123] Gathering logs for etcd [40a1ee10a7d0] ...
	I0831 15:56:15.241307    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a1ee10a7d0"
	I0831 15:56:15.255036    4326 logs.go:123] Gathering logs for kube-proxy [845611e781c4] ...
	I0831 15:56:15.255045    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 845611e781c4"
	I0831 15:56:15.267335    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:56:15.267344    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:56:15.304137    4326 logs.go:123] Gathering logs for kube-apiserver [34010c9065bf] ...
	I0831 15:56:15.304146    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34010c9065bf"
	I0831 15:56:15.317914    4326 logs.go:123] Gathering logs for kube-scheduler [383c3cd78d19] ...
	I0831 15:56:15.317922    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 383c3cd78d19"
	I0831 15:56:15.331579    4326 logs.go:123] Gathering logs for kube-controller-manager [803a82350bfc] ...
	I0831 15:56:15.331595    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 803a82350bfc"
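Each "Gathering logs for ..." pair above first lists containers by a k8s_<component> name filter, then dumps the last 400 lines of each match. A sketch of that collection loop, run locally rather than through ssh_runner; the component list is abbreviated:

// gatherlogs.go - sketch of the log-gathering loop in this section: list
// container IDs whose names match k8s_<component>, then dump the last 400
// lines of each container's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		fmt.Sprintf("--filter=name=k8s_%s", component),
		"--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", component, id, logs)
		}
	}
}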
	I0831 15:56:17.844731    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:19.200247    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:19.200270    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:22.846785    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:22.846858    4326 kubeadm.go:597] duration metric: took 4m3.28939825s to restartPrimaryControlPlane
	W0831 15:56:22.846928    4326 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0831 15:56:22.846950    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0831 15:56:23.852002    4326 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005069084s)
	I0831 15:56:23.852086    4326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 15:56:23.857226    4326 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:56:23.860190    4326 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:56:23.863050    4326 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 15:56:23.863056    4326 kubeadm.go:157] found existing configuration files:
	
	I0831 15:56:23.863078    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf
	I0831 15:56:23.865951    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 15:56:23.865978    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 15:56:23.868768    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf
	I0831 15:56:23.871669    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 15:56:23.871697    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 15:56:23.874807    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf
	I0831 15:56:23.877372    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 15:56:23.877399    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:56:23.880322    4326 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf
	I0831 15:56:23.883392    4326 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 15:56:23.883416    4326 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 15:56:23.886271    4326 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 15:56:23.900868    4326 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0831 15:56:23.900898    4326 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 15:56:23.956985    4326 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 15:56:23.957052    4326 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 15:56:23.957116    4326 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0831 15:56:24.006158    4326 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 15:56:24.200326    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:24.200340    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:24.010366    4326 out.go:235]   - Generating certificates and keys ...
	I0831 15:56:24.010398    4326 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 15:56:24.010437    4326 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 15:56:24.010476    4326 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0831 15:56:24.010505    4326 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0831 15:56:24.010543    4326 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0831 15:56:24.010573    4326 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0831 15:56:24.010607    4326 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0831 15:56:24.010650    4326 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0831 15:56:24.010690    4326 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0831 15:56:24.010729    4326 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0831 15:56:24.010745    4326 kubeadm.go:310] [certs] Using the existing "sa" key
	I0831 15:56:24.010768    4326 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 15:56:24.135818    4326 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 15:56:24.274631    4326 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 15:56:24.322654    4326 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 15:56:24.490637    4326 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 15:56:24.522298    4326 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 15:56:24.522704    4326 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 15:56:24.522753    4326 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 15:56:24.596636    4326 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 15:56:24.600970    4326 out.go:235]   - Booting up control plane ...
	I0831 15:56:24.601017    4326 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 15:56:24.601058    4326 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 15:56:24.601103    4326 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 15:56:24.601149    4326 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 15:56:24.601305    4326 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0831 15:56:29.200484    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:29.200525    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:29.100077    4326 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503516 seconds
	I0831 15:56:29.100258    4326 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 15:56:29.104724    4326 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 15:56:29.614717    4326 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 15:56:29.614895    4326 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-137000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 15:56:30.118816    4326 kubeadm.go:310] [bootstrap-token] Using token: qryssf.o5pm00f14c813c2u
	I0831 15:56:30.125369    4326 out.go:235]   - Configuring RBAC rules ...
	I0831 15:56:30.125421    4326 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 15:56:30.125458    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 15:56:30.128862    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 15:56:30.129840    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 15:56:30.130692    4326 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 15:56:30.131572    4326 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 15:56:30.135045    4326 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 15:56:30.292697    4326 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 15:56:30.523944    4326 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 15:56:30.524416    4326 kubeadm.go:310] 
	I0831 15:56:30.524449    4326 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 15:56:30.524452    4326 kubeadm.go:310] 
	I0831 15:56:30.524497    4326 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 15:56:30.524506    4326 kubeadm.go:310] 
	I0831 15:56:30.524529    4326 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 15:56:30.524560    4326 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 15:56:30.524586    4326 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 15:56:30.524588    4326 kubeadm.go:310] 
	I0831 15:56:30.524611    4326 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 15:56:30.524613    4326 kubeadm.go:310] 
	I0831 15:56:30.524637    4326 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 15:56:30.524676    4326 kubeadm.go:310] 
	I0831 15:56:30.524705    4326 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 15:56:30.524777    4326 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 15:56:30.524813    4326 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 15:56:30.524848    4326 kubeadm.go:310] 
	I0831 15:56:30.524891    4326 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 15:56:30.524942    4326 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 15:56:30.524947    4326 kubeadm.go:310] 
	I0831 15:56:30.525001    4326 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qryssf.o5pm00f14c813c2u \
	I0831 15:56:30.525066    4326 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 \
	I0831 15:56:30.525079    4326 kubeadm.go:310] 	--control-plane 
	I0831 15:56:30.525081    4326 kubeadm.go:310] 
	I0831 15:56:30.525161    4326 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 15:56:30.525167    4326 kubeadm.go:310] 
	I0831 15:56:30.525206    4326 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qryssf.o5pm00f14c813c2u \
	I0831 15:56:30.525262    4326 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 
	I0831 15:56:30.525320    4326 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 15:56:30.525329    4326 cni.go:84] Creating CNI manager for ""
	I0831 15:56:30.525336    4326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:56:30.529848    4326 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 15:56:30.536799    4326 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 15:56:30.539816    4326 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
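The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above configures the bridge CNI mentioned at 15:56:30.529. Its exact contents are not shown in the log; the sketch below writes a conflist following the standard CNI bridge+portmap shape, with an assumed pod subnet:

// cniconf.go - sketch of writing a bridge CNI conflist like the one scp'd in
// the log. The JSON shape follows the reference bridge and portmap plugins;
// the subnet value is an assumption, not taken from the log.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// 0644 matches typical CNI config file permissions.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}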
	I0831 15:56:30.544436    4326 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 15:56:30.544477    4326 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 15:56:30.544486    4326 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-137000 minikube.k8s.io/updated_at=2024_08_31T15_56_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=running-upgrade-137000 minikube.k8s.io/primary=true
	I0831 15:56:30.587463    4326 ops.go:34] apiserver oom_adj: -16
	I0831 15:56:30.587463    4326 kubeadm.go:1113] duration metric: took 43.019917ms to wait for elevateKubeSystemPrivileges
	I0831 15:56:30.587509    4326 kubeadm.go:394] duration metric: took 4m11.051510875s to StartCluster
	I0831 15:56:30.587522    4326 settings.go:142] acquiring lock: {Name:mk1f68c46dbdc908eb5f421f0b3b97ef9fe18a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:30.587616    4326 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:56:30.588010    4326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:30.588212    4326 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:56:30.588268    4326 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 15:56:30.588308    4326 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-137000"
	I0831 15:56:30.588321    4326 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-137000"
	W0831 15:56:30.588326    4326 addons.go:243] addon storage-provisioner should already be in state true
	I0831 15:56:30.588337    4326 host.go:66] Checking if "running-upgrade-137000" exists ...
	I0831 15:56:30.588343    4326 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-137000"
	I0831 15:56:30.588376    4326 config.go:182] Loaded profile config "running-upgrade-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:56:30.588385    4326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-137000"
	I0831 15:56:30.592726    4326 out.go:177] * Verifying Kubernetes components...
	I0831 15:56:30.593347    4326 kapi.go:59] client config for running-upgrade-137000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/running-upgrade-137000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c98030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 15:56:30.596997    4326 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-137000"
	W0831 15:56:30.597002    4326 addons.go:243] addon default-storageclass should already be in state true
	I0831 15:56:30.597009    4326 host.go:66] Checking if "running-upgrade-137000" exists ...
	I0831 15:56:30.597498    4326 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 15:56:30.597503    4326 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 15:56:30.597508    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	I0831 15:56:30.600789    4326 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:30.604740    4326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:30.608809    4326 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:56:30.608815    4326 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 15:56:30.608821    4326 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/running-upgrade-137000/id_rsa Username:docker}
	I0831 15:56:30.679460    4326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:56:30.685334    4326 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:56:30.685375    4326 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:30.689590    4326 api_server.go:72] duration metric: took 101.369209ms to wait for apiserver process to appear ...
	I0831 15:56:30.689597    4326 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:56:30.689603    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:30.704360    4326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 15:56:30.723696    4326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 15:56:31.046340    4326 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 15:56:31.046353    4326 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 15:56:34.200821    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:34.200880    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:35.691538    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:35.691598    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:39.201285    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:39.201330    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:40.691679    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:40.691720    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:44.202002    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:44.202024    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:45.691890    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:45.691980    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:49.202781    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:49.202860    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:50.692238    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:50.692264    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:54.204040    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:54.204084    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:55.692878    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:55.692926    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:59.205529    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:59.205607    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:00.693553    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:00.693602    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0831 15:57:01.046950    4326 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0831 15:57:01.055255    4326 out.go:177] * Enabled addons: storage-provisioner
	I0831 15:57:01.062238    4326 addons.go:510] duration metric: took 30.474875625s for enable addons: enabled=[storage-provisioner]
	I0831 15:57:04.207856    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:04.207894    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:05.693885    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:05.693915    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:09.209326    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:09.209409    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:10.694875    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:10.694911    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:14.211886    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:14.212096    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:14.235354    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:14.235457    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:14.250754    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:14.250836    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:14.265853    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:14.265931    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:14.276919    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:14.277000    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:14.286959    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:14.287031    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:14.297645    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:14.297710    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:14.308118    4510 logs.go:276] 0 containers: []
	W0831 15:57:14.308131    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:14.308200    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:14.319074    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:14.319091    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:14.319096    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:14.323584    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:14.323592    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:14.342616    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:14.342627    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:14.359239    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:14.359254    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:14.373014    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:14.373025    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:14.390416    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:14.390427    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:14.401439    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:14.401449    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:14.412656    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:14.412666    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:14.438184    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:14.438195    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:14.450100    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:14.450111    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:14.467494    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:14.467504    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:14.504725    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:14.504734    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:14.532293    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:14.532304    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:14.545925    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:14.545935    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:15.696111    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:15.696136    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:14.557184    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:14.557195    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:14.648476    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:14.648489    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:14.662458    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:14.662471    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:17.179545    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:20.697688    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:20.697747    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:22.181768    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:22.181944    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:22.196772    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:22.196852    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:22.207835    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:22.207910    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:22.218120    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:22.218178    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:22.228769    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:22.228857    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:22.239122    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:22.239195    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:22.249660    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:22.249727    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:22.259250    4510 logs.go:276] 0 containers: []
	W0831 15:57:22.259260    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:22.259313    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:22.272645    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:22.272667    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:22.272672    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:22.288257    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:22.288265    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:22.312247    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:22.312255    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:22.337294    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:22.337305    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:22.341597    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:22.341603    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:22.355660    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:22.355670    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:22.370290    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:22.370299    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:22.381843    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:22.381853    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:22.399764    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:22.399773    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:22.437370    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:22.437380    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:22.450913    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:22.450926    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:22.469192    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:22.469202    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:22.480834    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:22.480846    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:22.517617    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:22.517630    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:22.530716    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:22.530727    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:22.542511    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:22.542523    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:22.554423    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:22.554434    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
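	The cycle above repeats for the rest of this section: each process polls the apiserver's /healthz endpoint at https://10.0.2.15:8443, hits the client timeout, and falls back to enumerating and tailing the control-plane containers. Two minikube processes (PIDs 4326 and 4510) are interleaved in this capture, which is why timestamps occasionally step backwards between adjacent lines. The polling half of the loop boils down to something like the following Go sketch; the function name, timeout values, and retry budget are assumptions for illustration, not minikube's actual code:

	// Minimal sketch (assumed, not minikube's implementation) of the
	// healthz polling pattern visible in the api_server.go log lines.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed; the log shows "Client.Timeout exceeded"
			Transport: &http.Transport{
				// the apiserver on 10.0.2.15:8443 serves a cluster-local cert
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		url := "https://10.0.2.15:8443/healthz"
		deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
		for time.Now().Before(deadline) {
			if err := checkHealthz(url); err == nil {
				fmt.Println("apiserver healthy")
				return
			}
			time.Sleep(3 * time.Second) // roughly the cadence seen above
		}
		fmt.Println("gave up waiting for apiserver")
	}

	In the log, every failed probe surfaces as a "stopped: ... Client.Timeout exceeded while awaiting headers" line immediately before the next log-gathering pass begins.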
	I0831 15:57:25.699849    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:25.699890    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:25.067985    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:30.702042    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:30.702141    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:30.712984    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:30.713063    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:30.723662    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:30.723730    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:30.734736    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:30.734802    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:30.744720    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:30.744787    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:30.762411    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:30.762484    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:30.779445    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:30.779526    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:30.793634    4326 logs.go:276] 0 containers: []
	W0831 15:57:30.793646    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:30.793707    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:30.807550    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:30.807565    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:30.807571    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:30.843335    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:30.843348    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:30.858001    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:30.858014    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:30.874811    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:30.874829    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:30.886644    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:30.886655    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:30.899313    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:30.899325    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:30.917149    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:30.917160    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:30.928880    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:30.928894    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:30.952492    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:30.952502    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:30.957041    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:30.957051    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:30.993466    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:30.993476    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:31.008351    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:31.008365    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:31.019707    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:31.019717    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:33.536753    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:30.070301    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:30.070754    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:30.109112    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:30.109240    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:30.130457    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:30.130548    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:30.144992    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:30.145072    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:30.157375    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:30.157461    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:30.167991    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:30.168055    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:30.178427    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:30.178499    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:30.188836    4510 logs.go:276] 0 containers: []
	W0831 15:57:30.188846    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:30.188906    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:30.199931    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:30.199958    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:30.199965    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:30.211853    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:30.211863    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:30.223398    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:30.223407    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:30.262885    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:30.262895    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:30.277350    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:30.277361    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:30.295061    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:30.295072    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:30.332137    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:30.332151    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:30.358171    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:30.358183    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:30.370618    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:30.370629    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:30.396686    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:30.396696    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:30.408286    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:30.408299    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:30.425519    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:30.425532    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:30.442568    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:30.442579    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:30.459743    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:30.459753    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:30.471276    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:30.471286    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:30.475942    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:30.475956    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:30.490451    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:30.490464    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:33.008006    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:38.538933    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:38.539092    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:38.550958    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:38.551026    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:38.562931    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:38.563015    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:38.573230    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:38.573298    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:38.587195    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:38.587259    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:38.597776    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:38.597845    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:38.610845    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:38.610912    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:38.621024    4326 logs.go:276] 0 containers: []
	W0831 15:57:38.621035    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:38.621092    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:38.631495    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:38.631510    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:38.631516    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:38.643261    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:38.643275    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:38.010523    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:38.010666    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:38.022690    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:38.022767    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:38.033054    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:38.033117    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:38.043522    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:38.043594    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:38.057597    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:38.057661    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:38.067662    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:38.067733    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:38.078199    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:38.078265    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:38.088340    4510 logs.go:276] 0 containers: []
	W0831 15:57:38.088351    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:38.088409    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:38.099273    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:38.099290    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:38.099295    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:38.110394    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:38.110405    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:38.123252    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:38.123265    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:38.142214    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:38.142226    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:38.153064    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:38.153077    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:38.178584    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:38.178605    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:38.212678    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:38.212690    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:38.238153    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:38.238166    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:38.255601    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:38.255613    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:38.270617    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:38.270628    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:38.307834    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:38.307840    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:38.321529    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:38.321540    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:38.333492    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:38.333503    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:38.349219    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:38.349230    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:38.364069    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:38.364083    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:38.376684    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:38.376695    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:38.394074    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:38.394088    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:38.666948    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:38.666958    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:38.671402    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:38.671409    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:38.707211    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:38.707222    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:38.721994    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:38.722009    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:38.735921    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:38.735932    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:38.747102    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:38.747116    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:38.768911    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:38.768923    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:38.802807    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:38.802820    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:38.817225    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:38.817234    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:38.832497    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:38.832508    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:38.844389    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:38.844402    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
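	The gathering half of each cycle is visible verbatim above: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} query per component (kindnet consistently matches 0 containers, hence the repeated warning), followed by docker logs --tail 400 <id> for every match. A hedged, self-contained sketch of that loop, with a hypothetical gatherComponentLogs helper:

	// Sketch of the per-component container discovery and log tailing
	// seen in each cycle. The wrapper is hypothetical; the docker
	// commands are taken directly from the log lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func gatherComponentLogs(component string) error {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
		for _, id := range ids {
			// matches the `docker logs --tail 400 <id>` commands above
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			if err := gatherComponentLogs(c); err != nil {
				fmt.Println("error:", err)
			}
		}
	}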
	I0831 15:57:41.357582    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:40.900214    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:46.359695    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:46.359792    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:46.372857    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:46.372938    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:46.384105    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:46.384174    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:46.394976    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:46.395037    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:46.409526    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:46.409595    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:46.420115    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:46.420178    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:46.430730    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:46.430793    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:46.440972    4326 logs.go:276] 0 containers: []
	W0831 15:57:46.440984    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:46.441035    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:46.451075    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:46.451089    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:46.451094    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:46.484146    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:46.484158    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:46.502194    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:46.502205    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:46.515463    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:46.515474    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:46.527100    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:46.527109    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:46.539323    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:46.539337    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:46.556690    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:46.556701    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:46.580366    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:46.580375    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:46.584841    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:46.584849    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:46.620553    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:46.620565    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:46.632083    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:46.632094    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:46.652828    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:46.652840    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:46.663859    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:46.663871    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:45.901833    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:45.901945    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:45.912637    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:45.912715    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:45.923936    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:45.924006    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:45.938292    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:45.938360    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:45.949213    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:45.949282    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:45.959330    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:45.959393    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:45.969825    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:45.969894    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:45.980322    4510 logs.go:276] 0 containers: []
	W0831 15:57:45.980332    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:45.980386    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:45.990654    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:45.990672    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:45.990678    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:46.025720    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:46.025734    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:46.039969    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:46.039979    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:46.056571    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:46.056581    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:46.071200    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:46.071213    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:46.082663    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:46.082672    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:46.094342    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:46.094354    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:46.132035    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:46.132043    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:46.143119    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:46.143131    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:46.156914    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:46.156924    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:46.161067    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:46.161073    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:46.187488    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:46.187502    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:46.204734    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:46.204744    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:46.219038    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:46.219049    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:46.234043    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:46.234054    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:46.246030    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:46.246040    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:46.256944    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:46.256956    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:48.782228    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:49.177911    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:53.784484    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:53.784683    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:53.801615    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:53.801705    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:53.815138    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:53.815204    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:53.826857    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:53.826923    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:53.837213    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:53.837292    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:53.847841    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:53.847900    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:53.858425    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:53.858487    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:53.868919    4510 logs.go:276] 0 containers: []
	W0831 15:57:53.868931    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:53.868992    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:53.883151    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:53.883169    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:53.883176    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:53.922982    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:53.922993    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:53.937187    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:53.937197    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:53.949486    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:53.949500    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:53.970800    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:53.970811    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:53.985574    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:53.985587    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:54.001334    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:54.001344    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:54.013624    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:54.013635    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:54.038281    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:54.038292    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:54.055787    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:54.055799    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:54.067793    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:54.067805    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:54.079041    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:54.079051    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:54.104241    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:54.104249    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:54.143256    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:54.143269    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:54.147454    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:54.147464    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:54.161538    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:54.161549    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:54.173399    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:54.173411    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:54.180310    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:54.180419    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:54.192128    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:57:54.192201    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:54.203169    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:57:54.203237    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:54.214905    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:57:54.214977    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:54.227811    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:57:54.227880    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:54.238086    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:57:54.238158    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:54.248820    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:57:54.248892    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:54.259192    4326 logs.go:276] 0 containers: []
	W0831 15:57:54.259206    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:54.259259    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:54.269466    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:57:54.269482    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:57:54.269489    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:57:54.285325    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:54.285337    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:54.308425    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:54.308436    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:54.340278    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:54.340288    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:54.377777    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:57:54.377789    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:57:54.391647    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:57:54.391660    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:57:54.403028    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:57:54.403042    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:57:54.419211    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:57:54.419221    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:57:54.436331    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:54.436345    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:54.440864    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:57:54.440871    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:57:54.454816    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:57:54.454826    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:57:54.466421    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:57:54.466430    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:57:54.477782    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:57:54.477793    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:56.991965    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:56.691184    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:01.994029    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:01.994118    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:02.005493    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:02.005565    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:02.017463    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:02.017528    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:02.029242    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:02.029324    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:02.040202    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:02.040273    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:02.052747    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:02.052823    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:02.064753    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:02.064825    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:02.076198    4326 logs.go:276] 0 containers: []
	W0831 15:58:02.076209    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:02.076266    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:02.087987    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:02.088001    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:02.088007    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:02.101038    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:02.101049    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:02.116561    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:02.116571    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:02.129891    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:02.129901    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:02.145582    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:02.145593    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:02.178229    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:02.178240    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:02.183068    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:02.183076    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:02.218584    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:02.218594    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:02.233366    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:02.233386    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:02.257729    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:02.257742    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:02.268865    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:02.268876    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:02.287940    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:02.287950    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:02.299414    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:02.299428    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:01.691495    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:01.691919    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:01.722413    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:01.722526    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:01.740229    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:01.740316    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:01.754284    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:01.754360    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:01.768462    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:01.768529    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:01.778758    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:01.778831    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:01.788970    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:01.789035    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:01.798919    4510 logs.go:276] 0 containers: []
	W0831 15:58:01.798931    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:01.798988    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:01.809789    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:01.809807    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:01.809814    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:01.824134    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:01.824145    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:01.835767    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:01.835779    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:01.874808    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:01.874823    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:01.891547    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:01.891560    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:01.908499    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:01.908512    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:01.920022    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:01.920033    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:01.933318    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:01.933332    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:01.953498    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:01.953511    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:01.967577    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:01.967589    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:01.978739    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:01.978750    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:01.995830    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:01.995838    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:02.008410    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:02.008420    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:02.057442    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:02.057455    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:02.084852    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:02.084866    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:02.097752    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:02.097764    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:02.124283    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:02.124300    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:04.822411    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:04.630870    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:09.824522    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:09.824598    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:09.846228    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:09.846379    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:09.858661    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:09.858728    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:09.870090    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:09.870156    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:09.881453    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:09.881521    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:09.893315    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:09.893384    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:09.905047    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:09.905127    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:09.916500    4326 logs.go:276] 0 containers: []
	W0831 15:58:09.916512    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:09.916574    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:09.928108    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:09.928124    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:09.928129    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:09.947369    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:09.947380    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:09.974111    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:09.974128    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:10.010592    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:10.010616    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:10.016040    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:10.016053    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:10.045028    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:10.045039    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:10.057384    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:10.057397    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:10.072540    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:10.072554    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:10.088225    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:10.088237    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:10.099515    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:10.099527    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:10.135305    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:10.135319    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:10.149162    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:10.149174    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:10.160509    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:10.160522    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:12.672771    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:09.633018    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:09.633253    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:09.655057    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:09.655162    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:09.669666    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:09.669739    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:09.681812    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:09.681874    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:09.692464    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:09.692534    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:09.705253    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:09.705332    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:09.716256    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:09.716319    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:09.726825    4510 logs.go:276] 0 containers: []
	W0831 15:58:09.726835    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:09.726889    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:09.737414    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:09.737433    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:09.737439    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:09.748515    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:09.748527    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:09.773894    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:09.773909    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:09.788302    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:09.788313    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:09.803524    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:09.803534    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:09.828048    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:09.828057    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:09.840261    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:09.840273    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:09.880203    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:09.880222    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:09.895538    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:09.895549    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:09.913836    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:09.913853    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:09.930293    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:09.930305    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:09.949360    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:09.949368    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:09.964159    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:09.964169    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:10.001763    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:10.001774    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:10.019402    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:10.019415    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:10.031741    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:10.031756    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:10.043794    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:10.043806    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:12.550254    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:17.672984    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:17.673126    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:17.688863    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:17.688928    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:17.700626    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:17.700738    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:17.713030    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:17.713104    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:17.725165    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:17.725239    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:17.736096    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:17.736166    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:17.747848    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:17.747922    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:17.759485    4326 logs.go:276] 0 containers: []
	W0831 15:58:17.759515    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:17.759576    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:17.771103    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:17.771123    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:17.771131    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:17.787213    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:17.787225    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:17.804254    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:17.804267    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:17.816597    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:17.816607    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:17.842058    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:17.842071    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:17.861118    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:17.861135    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:17.896825    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:17.896845    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:17.901956    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:17.901965    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:17.942950    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:17.942961    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:17.957814    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:17.957827    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:17.973304    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:17.973314    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:17.985771    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:17.985786    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:17.998816    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:17.998828    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
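The "container status" command above is worth unpacking: `which crictl || echo crictl` substitutes the crictl path when the binary is installed (or the bare word crictl, which then fails to run), and the trailing `|| sudo docker ps -a` falls back to Docker. A hedged Go sketch of that fallback, shelled out through bash the same way the log shows; the helper name and error handling are assumptions.

    // Assumed helper reproducing the crictl-or-docker fallback from the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        // prefer crictl if present, otherwise fall back to `docker ps -a`
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }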
	I0831 15:58:17.550928    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:17.551152    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:17.571672    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:17.571768    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:17.586036    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:17.586108    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:17.600286    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:17.600351    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:17.611082    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:17.611159    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:17.621545    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:17.621610    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:17.632233    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:17.632299    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:17.642209    4510 logs.go:276] 0 containers: []
	W0831 15:58:17.642220    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:17.642275    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:17.656730    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:17.656752    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:17.656758    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:17.672441    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:17.672451    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:17.686006    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:17.686018    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:17.709745    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:17.709757    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:17.735355    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:17.735374    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:17.773572    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:17.773583    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:17.789840    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:17.789850    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:17.794561    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:17.794571    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:17.823128    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:17.823142    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:17.838312    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:17.838323    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:17.850054    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:17.850066    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:17.868081    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:17.868093    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:17.880259    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:17.880270    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:17.900134    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:17.900147    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:17.912631    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:17.912648    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:17.952539    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:17.952554    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:17.967758    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:17.967770    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:20.516953    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:20.492574    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:25.519165    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:25.519243    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:25.535381    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:25.535462    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:25.548952    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:25.549031    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:25.565707    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:25.565782    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:25.577083    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:25.577132    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:25.588640    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:25.588696    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:25.601702    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:25.601766    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:25.614563    4326 logs.go:276] 0 containers: []
	W0831 15:58:25.614575    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:25.614637    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:25.626084    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:25.626100    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:25.626105    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:25.638325    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:25.638334    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:25.657183    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:25.657194    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:25.682864    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:25.682876    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:25.720198    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:25.720214    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:25.725368    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:25.725378    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:25.766627    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:25.766640    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:25.780238    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:25.780252    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:25.796347    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:25.796364    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:25.808447    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:25.808459    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:25.824255    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:25.824265    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:25.839540    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:25.839556    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:25.852623    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:25.852637    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:28.367301    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
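Each gathering cycle begins by resolving a component's containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; the "N containers: [...]" lines report the IDs that came back, and two IDs for one component (as with kube-apiserver under PID 4510) indicate a restarted container alongside its predecessor. A sketch under that reading, with an assumed helper name rather than the logs.go implementation:

    // Assumed sketch of the per-component container-ID lookup seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // one container ID per output line, matching "N containers: [...]"
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }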
	I0831 15:58:25.494876    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:25.495113    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:25.518117    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:25.518216    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:25.536318    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:25.536364    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:25.549188    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:25.549220    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:25.561365    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:25.561436    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:25.575764    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:25.575839    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:25.587247    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:25.587323    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:25.598511    4510 logs.go:276] 0 containers: []
	W0831 15:58:25.598523    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:25.598580    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:25.610205    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:25.610225    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:25.610231    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:25.622269    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:25.622284    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:25.636599    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:25.636610    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:25.662978    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:25.662987    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:25.705929    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:25.705946    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:25.710780    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:25.710787    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:25.729030    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:25.729040    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:25.756273    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:25.756296    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:25.769546    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:25.769557    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:25.785058    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:25.785070    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:25.797020    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:25.797029    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:25.810094    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:25.810104    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:25.847481    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:25.847497    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:25.862516    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:25.862528    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:25.881230    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:25.881244    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:25.893016    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:25.893027    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:25.910725    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:25.910735    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:28.429952    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:33.369450    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:33.369627    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:33.383795    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:33.383871    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:33.395315    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:33.395391    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:33.405764    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:33.405836    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:33.416088    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:33.416154    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:33.426222    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:33.426299    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:33.437331    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:33.437400    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:33.448765    4326 logs.go:276] 0 containers: []
	W0831 15:58:33.448776    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:33.448833    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:33.459791    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:33.459811    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:33.459817    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:33.496134    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:33.496150    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:33.535230    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:33.535242    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:33.551208    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:33.551220    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:33.564255    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:33.564279    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:33.588791    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:33.588804    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:33.600941    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:33.600954    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:33.606284    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:33.606298    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:33.622444    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:33.622454    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:33.637170    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:33.637186    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:33.650019    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:33.650029    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:33.431990    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:33.432051    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:33.443635    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:33.443706    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:33.456207    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:33.456280    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:33.472175    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:33.472244    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:33.485145    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:33.485216    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:33.496339    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:33.496408    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:33.508734    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:33.508821    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:33.519896    4510 logs.go:276] 0 containers: []
	W0831 15:58:33.519910    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:33.519972    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:33.531457    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:33.531479    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:33.531486    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:33.557712    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:33.557726    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:33.571062    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:33.571078    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:33.585008    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:33.585022    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:33.597404    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:33.597417    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:33.602214    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:33.602225    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:33.621174    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:33.621185    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:33.637221    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:33.637229    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:33.657466    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:33.657483    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:33.673839    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:33.673854    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:33.686529    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:33.686540    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:33.706469    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:33.706483    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:33.732357    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:33.732373    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:33.749332    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:33.749343    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:33.765526    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:33.765539    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:33.777999    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:33.778010    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:33.815798    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:33.815810    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:33.662261    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:33.662273    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:33.682559    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:33.682572    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:36.197802    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:36.352617    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:41.200182    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:41.200627    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:41.241199    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:41.241332    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:41.261518    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:41.261611    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:41.276671    4326 logs.go:276] 2 containers: [ba21391de275 607c83286101]
	I0831 15:58:41.276749    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:41.289086    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:41.289160    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:41.300115    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:41.300185    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:41.310659    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:41.310726    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:41.321399    4326 logs.go:276] 0 containers: []
	W0831 15:58:41.321411    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:41.321471    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:41.331880    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:41.331895    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:41.331900    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:41.346414    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:41.346424    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:41.361113    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:41.361130    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:41.373619    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:41.373630    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:41.399988    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:41.400006    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:41.435991    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:41.436008    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:41.441314    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:41.441324    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:41.479444    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:41.479456    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:41.492978    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:41.492989    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:41.511530    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:41.511541    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:41.525043    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:41.525056    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:41.537377    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:41.537389    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:41.550039    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:41.550051    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:41.354731    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:41.354819    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:41.366149    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:41.366227    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:41.378799    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:41.378871    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:41.389871    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:41.389937    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:41.401106    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:41.401178    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:41.413040    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:41.413109    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:41.424223    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:41.424290    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:41.438639    4510 logs.go:276] 0 containers: []
	W0831 15:58:41.438651    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:41.438712    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:41.450123    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:41.450142    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:41.450148    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:41.455242    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:41.455252    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:41.469985    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:41.470000    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:41.482836    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:41.482848    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:41.522698    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:41.522713    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:41.541553    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:41.541565    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:41.554149    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:41.554161    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:41.566786    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:41.566797    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:41.586940    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:41.586951    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:41.603543    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:41.603555    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:41.627101    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:41.627113    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:41.642010    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:41.642022    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:41.654850    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:41.654867    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:41.679800    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:41.679820    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:41.693491    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:41.693503    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:41.735350    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:41.735365    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:41.748633    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:41.748645    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:44.277914    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:44.067833    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:49.278101    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:49.278184    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:49.289769    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:49.289838    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:49.301421    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:49.301503    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:49.314780    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:49.314849    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:49.326584    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:49.326654    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:49.338486    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:49.338547    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:49.349844    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:49.349918    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:49.369094    4510 logs.go:276] 0 containers: []
	W0831 15:58:49.369107    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:49.369167    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:49.380988    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:49.381007    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:49.381014    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:49.400291    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:49.400307    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:49.416079    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:49.416097    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:49.444948    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:49.444964    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:49.457489    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:49.457499    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:49.469052    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:49.469064    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:49.506934    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:49.506944    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:49.542210    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:49.542224    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:49.070024    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:49.070241    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:49.089619    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:49.089718    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:49.104237    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:49.104315    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:49.116405    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:58:49.116480    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:49.126994    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:49.127062    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:49.137925    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:49.137984    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:49.148136    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:49.148204    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:49.158228    4326 logs.go:276] 0 containers: []
	W0831 15:58:49.158239    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:49.158294    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:49.168847    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:58:49.168865    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:49.168870    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:49.181187    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:49.181198    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:49.202311    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:49.202321    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:49.228564    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:49.228574    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:49.261741    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:49.261755    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:49.276915    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:49.276928    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:49.289817    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:49.289827    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:49.308539    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:49.308551    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:49.351163    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:58:49.351173    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:58:49.364718    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:58:49.364733    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:58:49.376745    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:49.376758    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:49.389363    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:49.389380    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:49.394600    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:49.394612    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:49.409736    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:49.409749    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:49.426238    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:49.426254    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:51.941612    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
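The per-container collection itself is a `docker logs --tail 400 <id>` run through bash, while kubelet and Docker daemon logs come from `journalctl -u` and kernel messages from `dmesg`, as the commands above show. A small assumed sketch of the docker-logs step; the function shape is hypothetical, only the 400-line tail and the container ID come from the log.

    // Assumed sketch of one "Gathering logs for <component> [<id>]" step.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gatherLogs(id string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := gatherLogs("8d5cc5198109") // kube-apiserver ID from the log
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }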
	I0831 15:58:49.558307    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:49.558319    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:49.575995    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:49.576006    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:49.600295    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:49.600304    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:49.604546    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:49.604556    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:49.622667    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:49.622680    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:49.633914    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:49.633927    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:49.653098    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:49.653111    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:49.668394    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:49.668405    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:49.680597    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:49.680606    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:52.195046    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:56.943824    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:56.944050    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:56.966804    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:58:56.966891    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:56.981870    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:58:56.981939    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:56.996435    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:58:56.996508    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:57.007094    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:58:57.007153    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:57.017000    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:58:57.017066    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:57.027376    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:58:57.027441    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:57.037820    4326 logs.go:276] 0 containers: []
	W0831 15:58:57.037830    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:57.037886    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:57.048262    4326 logs.go:276] 1 containers: [38af8b85242a]
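Each diagnostic sweep begins by listing candidate containers per component with a docker name filter, then tails the last 400 lines of each hit. A hedged stand-in for those two steps — containerIDs and tailLogs are hypothetical helper names, and exec.Command runs locally, whereas the real commands go through minikube's ssh_runner into the guest:

```go
// Mirror of the enumeration (logs.go:276) and per-container dump steps above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose names match a kube component prefix.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command(
		"docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs grabs the newest 400 log lines of one container, stdout and stderr.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		logs, _ := tailLogs(id)
		fmt.Print(logs)
	}
}
```

Note that pid 4510's sweeps below list two IDs per control-plane component (e.g. apiservers c08e846827a6 and 4873f60be323), consistent with containers that exited and were restarted.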
	I0831 15:58:57.048281    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:57.048286    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:57.082067    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:57.082079    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
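Beyond per-container logs, each sweep pulls host-level context: the kubelet's journal unit and the kernel ring buffer filtered to warning severity and above (-L=never turns color off; -P and -H appear to be util-linux's no-pager and human-readable flags). A sketch of those two calls; the pipe through tail needs a shell, which is why the logged commands wrap everything in /bin/bash -c:

```go
// Host-level log gathering, as in the kubelet and dmesg steps above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Newest 400 lines of the kubelet systemd unit.
	kubelet, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
	}
	fmt.Print(string(kubelet))

	// Kernel ring buffer, warnings and above, capped via a shell pipe.
	dmesg, err := exec.Command("/bin/bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).Output()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(dmesg))
}
```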
	I0831 15:58:57.086802    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:58:57.086812    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:58:57.101479    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:58:57.101489    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:58:57.112748    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:58:57.112761    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:58:57.124035    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:58:57.124046    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:58:57.141796    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:58:57.141806    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:58:57.156176    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:58:57.156189    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:58:57.170971    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:58:57.170980    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:58:57.183013    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:58:57.183024    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:58:57.194631    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:58:57.194642    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
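The "container status" step is deliberately runtime-agnostic: the logged bash command substitutes `which crictl || echo crictl`, so when crictl is absent the inner command fails and the `|| sudo docker ps -a` branch runs instead. A Go equivalent of that fallback, as a sketch only:

```go
// Prefer crictl if it resolves on PATH, otherwise fall back to docker,
// matching the intent of the logged bash one-liner above.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a").Output()
	}
	// No crictl on PATH: take the `|| sudo docker ps -a` branch.
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}
```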
	I0831 15:58:57.208661    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:57.208670    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
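The "describe nodes" step does not use the host's kubectl: it execs a version-pinned binary shipped inside the guest (/var/lib/minikube/binaries/v1.24.1/kubectl) against the guest's kubeconfig. Reproduced as a sketch, with both paths copied verbatim from the log line above:

```go
// Run the guest's version-pinned kubectl against the guest kubeconfig.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	).CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```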
	I0831 15:58:57.247002    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:58:57.247012    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:58:57.261823    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:57.261834    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:57.289001    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:58:57.289016    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:58:57.195641    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:57.195714    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:57.207008    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:57.207081    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:57.219057    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:57.219132    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:57.230840    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:57.230907    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:57.243034    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:57.243115    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:57.256293    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:57.256361    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:57.272102    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:57.272175    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:57.283111    4510 logs.go:276] 0 containers: []
	W0831 15:58:57.283125    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:57.283187    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:57.295927    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:57.295944    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:57.295951    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:57.308587    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:57.308601    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:57.313330    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:57.313336    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:57.334089    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:57.334103    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:57.345799    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:57.345813    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:57.375846    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:57.375860    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:57.420048    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:57.420062    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:57.434999    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:57.435013    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:57.446509    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:57.446519    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:57.457983    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:57.457994    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:57.482849    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:57.482861    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:57.494331    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:57.494343    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:57.533449    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:57.533458    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:57.547803    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:57.547814    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:57.561688    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:57.561699    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:57.577887    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:57.577898    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:57.605592    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:57.605604    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:59.803866    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:00.125195    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
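At this point both pids have completed one full probe-fail-sweep cycle and immediately probe again; the same pattern repeats for the rest of this section, with roughly 2-3 seconds between the end of a sweep and the next probe. An illustrative outer loop — waitForHealthy, the stand-in callbacks, and the pause are all hypothetical, inferred from the timestamps rather than taken from minikube's code:

```go
// Probe until healthy or until an overall deadline; dump diagnostics after
// each failed probe, as the cycles above do.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForHealthy(probe func() error, gather func(), deadline time.Time) error {
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		gather()                    // full kubelet/dmesg/container log sweep
		time.Sleep(2 * time.Second) // assumed pause between sweep end and next probe
	}
	return errors.New("apiserver never became healthy before the deadline")
}

func main() {
	err := waitForHealthy(
		func() error { return errors.New("context deadline exceeded") }, // stand-in probe
		func() { fmt.Println("gathering logs ...") },                    // stand-in sweep
		time.Now().Add(10*time.Second),
	)
	fmt.Println(err)
}
```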
	I0831 15:59:04.806058    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:04.806227    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:04.820630    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:04.820727    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:04.833722    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:04.833807    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:04.847252    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:04.847337    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:04.860974    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:04.861061    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:04.871542    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:04.871624    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:04.882201    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:04.882278    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:04.892525    4326 logs.go:276] 0 containers: []
	W0831 15:59:04.892536    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:04.892602    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:04.902852    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:04.902872    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:04.902877    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:04.937713    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:04.937724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:04.949337    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:04.949348    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:04.963592    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:04.963601    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:04.976351    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:04.976362    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:04.987950    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:04.987959    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:05.005846    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:05.005857    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:05.017224    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:05.017238    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:05.022600    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:05.022608    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:05.057234    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:05.057247    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:05.081024    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:05.081033    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:05.095125    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:05.095140    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:05.106596    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:05.106608    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:05.124474    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:05.124483    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:05.139678    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:05.139687    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:07.652848    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:05.127331    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:05.127439    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:05.138701    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:05.138774    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:05.150416    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:05.150487    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:05.161376    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:05.161445    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:05.172370    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:05.172435    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:05.186411    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:05.186469    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:05.196708    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:05.196768    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:05.207000    4510 logs.go:276] 0 containers: []
	W0831 15:59:05.207009    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:05.207062    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:05.217591    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:05.217609    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:05.217615    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:05.243556    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:05.243565    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:05.257569    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:05.257583    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:05.273514    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:05.273524    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:05.285665    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:05.285675    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:05.303356    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:05.303366    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:05.314450    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:05.314461    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:05.326064    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:05.326076    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:05.365768    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:05.365777    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:05.399773    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:05.399784    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:05.416586    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:05.416597    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:05.440125    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:05.440135    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:05.454613    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:05.454624    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:05.465891    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:05.465903    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:05.481484    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:05.481495    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:05.498949    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:05.498962    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:05.502994    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:05.503000    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:08.018977    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:12.655041    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:12.655151    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:12.668725    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:12.668801    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:12.679673    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:12.679736    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:12.690540    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:12.690612    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:12.701992    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:12.702058    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:12.712963    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:12.713040    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:12.726705    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:12.726767    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:12.737550    4326 logs.go:276] 0 containers: []
	W0831 15:59:12.737563    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:12.737621    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:12.748002    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:12.748020    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:12.748028    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:12.759765    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:12.759778    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:12.775816    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:12.775827    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:12.788148    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:12.788158    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:12.799954    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:12.799965    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:12.832256    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:12.832271    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:12.836836    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:12.836845    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:12.854273    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:12.854286    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:12.868867    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:12.868877    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:12.880885    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:12.880899    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:12.892690    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:12.892703    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:12.917732    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:12.917743    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:12.929790    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:12.929803    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:12.941595    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:12.941607    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:12.977592    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:12.977604    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:13.020648    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:13.020739    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:13.031947    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:13.032020    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:13.042522    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:13.042592    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:13.053012    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:13.053068    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:13.064014    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:13.064076    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:13.075093    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:13.075159    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:13.085506    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:13.085566    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:13.095824    4510 logs.go:276] 0 containers: []
	W0831 15:59:13.095836    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:13.095898    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:13.106331    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:13.106351    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:13.106357    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:13.117848    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:13.117860    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:13.133302    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:13.133315    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:13.145500    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:13.145513    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:13.157543    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:13.157556    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:13.168818    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:13.168828    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:13.193399    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:13.193412    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:13.231180    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:13.231191    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:13.246631    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:13.246642    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:13.264523    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:13.264534    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:13.282052    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:13.282061    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:13.299424    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:13.299435    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:13.311259    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:13.311270    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:13.350862    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:13.350870    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:13.355138    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:13.355146    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:13.380927    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:13.380942    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:13.394525    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:13.394535    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:15.495561    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:15.909090    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:20.496807    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:20.497003    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:20.517749    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:20.517854    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:20.532259    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:20.532331    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:20.546109    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:20.546179    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:20.557240    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:20.557312    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:20.573005    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:20.573074    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:20.583373    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:20.583442    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:20.593339    4326 logs.go:276] 0 containers: []
	W0831 15:59:20.593348    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:20.593402    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:20.605873    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:20.605890    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:20.605896    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:20.617713    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:20.617724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:20.629340    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:20.629352    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:20.645716    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:20.645727    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:20.657262    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:20.657276    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:20.672536    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:20.672547    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:20.688390    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:20.688399    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:20.706262    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:20.706276    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:20.739286    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:20.739294    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:20.756322    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:20.756331    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:20.768439    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:20.768449    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:20.780274    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:20.780285    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:20.815699    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:20.815711    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:20.836984    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:20.836996    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:20.862543    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:20.862554    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:23.369188    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:20.911264    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:20.911354    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:20.923428    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:20.923499    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:20.933659    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:20.933721    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:20.944489    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:20.944562    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:20.959323    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:20.959387    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:20.969572    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:20.969635    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:20.980111    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:20.980185    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:20.994967    4510 logs.go:276] 0 containers: []
	W0831 15:59:20.994979    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:20.995038    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:21.005426    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:21.005443    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:21.005448    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:21.023428    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:21.023440    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:21.056746    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:21.056758    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:21.068137    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:21.068148    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:21.094053    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:21.094063    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:21.108027    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:21.108038    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:21.125492    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:21.125503    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:21.137700    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:21.137714    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:21.148716    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:21.148727    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:21.160583    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:21.160594    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:21.165175    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:21.165182    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:21.182309    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:21.182322    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:21.197271    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:21.197281    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:21.208938    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:21.208948    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:21.221000    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:21.221011    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:21.245654    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:21.245662    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:21.284908    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:21.284918    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:23.800442    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:28.371420    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:28.371696    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:28.397959    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:28.398075    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:28.420558    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:28.420633    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:28.433072    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:28.433148    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:28.448354    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:28.448418    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:28.463822    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:28.463889    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:28.474581    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:28.474648    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:28.484804    4326 logs.go:276] 0 containers: []
	W0831 15:59:28.484814    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:28.484868    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:28.495104    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:28.495120    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:28.495126    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:28.509583    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:28.509595    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:28.521960    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:28.521970    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:28.546868    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:28.546886    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:28.559835    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:28.559850    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:28.571607    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:28.571620    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:28.594815    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:28.594828    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:28.617105    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:28.617119    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:28.642104    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:28.642117    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:28.802552    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:28.802703    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:28.813490    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:28.813557    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:28.823962    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:28.824033    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:28.834478    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:28.834546    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:28.844840    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:28.844907    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:28.855197    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:28.855272    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:28.866106    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:28.866171    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:28.879582    4510 logs.go:276] 0 containers: []
	W0831 15:59:28.879594    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:28.879650    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:28.890330    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:28.890347    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:28.890352    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:28.904212    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:28.904225    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:28.915966    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:28.915976    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:28.927852    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:28.927866    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:28.966558    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:28.966569    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:28.999941    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:28.999955    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:29.014353    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:29.014366    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:29.026475    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:29.026486    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:29.043253    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:29.043263    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:29.067167    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:29.067177    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:29.071387    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:29.071395    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:29.088965    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:29.088979    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:29.114516    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:29.114528    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:29.131823    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:29.131837    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:29.142664    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:29.142675    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:29.159251    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:29.159264    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:29.178270    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:29.178280    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:28.675544    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:28.675555    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:28.710081    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:28.710092    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:28.731883    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:28.731899    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:28.748651    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:28.748663    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:28.753804    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:28.753812    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:28.768113    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:28.768126    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:31.285323    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:31.691551    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:36.286907    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:36.287205    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:36.313058    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:36.313171    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:36.333074    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:36.333152    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:36.347984    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:36.348060    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:36.363663    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:36.363735    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:36.374694    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:36.374754    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:36.384857    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:36.384925    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:36.395199    4326 logs.go:276] 0 containers: []
	W0831 15:59:36.395208    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:36.395256    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:36.405420    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:36.405444    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:36.405449    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:36.420379    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:36.420391    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:36.432086    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:36.432097    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:36.443938    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:36.443948    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:36.476478    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:36.476490    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:36.488202    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:36.488213    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:36.503023    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:36.503032    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:36.527015    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:36.527024    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:36.531447    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:36.531455    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:36.542741    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:36.542753    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:36.554169    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:36.554181    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:36.577851    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:36.577860    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:36.614010    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:36.614021    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:36.632676    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:36.632690    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:36.644347    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:36.644359    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:36.693657    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:36.693758    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:36.709954    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:36.710029    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:36.720634    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:36.720707    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:36.731609    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:36.731681    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:36.742036    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:36.742102    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:36.752910    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:36.752986    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:36.764130    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:36.764193    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:36.779051    4510 logs.go:276] 0 containers: []
	W0831 15:59:36.779063    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:36.779123    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:36.789729    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:36.789751    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:36.789757    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:36.807509    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:36.807524    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:36.828837    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:36.828848    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:36.847609    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:36.847622    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:36.858984    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:36.858994    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:36.863225    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:36.863233    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:36.890524    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:36.890535    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:36.904022    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:36.904035    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:36.921629    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:36.921639    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:36.933595    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:36.933607    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:36.946154    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:36.946165    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:36.957396    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:36.957408    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:36.971367    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:36.971376    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:36.982270    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:36.982279    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:36.993615    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:36.993624    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:37.015663    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:37.015673    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:37.052375    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:37.052384    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:39.159168    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:39.587684    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:44.161760    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:44.161922    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:44.176887    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:44.176960    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:44.188526    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:44.188595    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:44.199110    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:44.199176    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:44.209483    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:44.209545    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:44.219867    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:44.219939    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:44.241343    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:44.241403    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:44.252277    4326 logs.go:276] 0 containers: []
	W0831 15:59:44.252288    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:44.252340    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:44.262685    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:44.262700    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:44.262706    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:44.277741    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:44.277755    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:44.297712    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:44.297724    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:44.312731    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:44.312744    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:44.324979    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:44.324989    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:44.358149    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:44.358162    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:44.372202    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:44.372215    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:44.384491    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:44.384500    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:44.395974    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:44.395987    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:44.407222    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:44.407232    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:44.419143    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:44.419156    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:44.444191    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:44.444199    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:44.456632    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:44.456642    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:44.461630    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:44.461636    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:44.497727    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:44.497740    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:47.018625    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:44.590079    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:44.590171    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:44.601654    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:44.601728    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:44.616098    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:44.616166    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:44.627146    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:44.627203    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:44.638470    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:44.638539    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:44.649007    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:44.649065    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:44.662641    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:44.662715    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:44.673124    4510 logs.go:276] 0 containers: []
	W0831 15:59:44.673134    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:44.673189    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:44.683999    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:44.684021    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:44.684029    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:44.700466    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:44.700477    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:44.715691    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:44.715705    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:44.727484    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:44.727496    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:44.751892    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:44.751904    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:44.765034    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:44.765045    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:44.800232    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:44.800244    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:44.826712    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:44.826724    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:44.841121    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:44.841135    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:44.856310    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:44.856323    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:44.868606    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:44.868619    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:44.880525    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:44.880535    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:44.918051    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:44.918061    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:44.922111    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:44.922119    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:44.935877    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:44.935893    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:44.953132    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:44.953142    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:44.971216    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:44.971227    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:47.483712    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:52.019344    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:52.019503    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:52.031875    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:52.031953    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:52.042166    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:52.042239    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:52.052564    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:52.052636    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:52.063703    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:52.063771    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:52.074158    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:52.074225    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:52.083850    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:52.083912    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:52.094090    4326 logs.go:276] 0 containers: []
	W0831 15:59:52.094105    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:52.094168    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:52.104617    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:52.104636    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 15:59:52.104642    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 15:59:52.116640    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:52.116653    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 15:59:52.133619    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:52.133628    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:52.167477    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:52.167487    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:52.171951    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 15:59:52.171958    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 15:59:52.183639    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 15:59:52.183649    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 15:59:52.198248    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 15:59:52.198258    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 15:59:52.209642    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:52.209653    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:52.245300    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 15:59:52.245313    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 15:59:52.259758    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 15:59:52.259772    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 15:59:52.271333    4326 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:52.271345    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:52.294593    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 15:59:52.294601    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 15:59:52.315589    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 15:59:52.315604    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 15:59:52.327019    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 15:59:52.327031    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 15:59:52.338644    4326 logs.go:123] Gathering logs for container status ...
	I0831 15:59:52.338655    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:52.485922    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:52.486081    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:52.497376    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:52.497443    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:52.507757    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:52.507819    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:52.518082    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:52.518152    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:52.528655    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:52.528716    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:52.539175    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:52.539250    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:52.549452    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:52.549513    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:52.562922    4510 logs.go:276] 0 containers: []
	W0831 15:59:52.562933    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:52.562985    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:52.573598    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:52.573613    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:52.573617    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:52.599174    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:52.599187    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:52.613580    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:52.613591    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:52.624591    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:52.624603    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:52.636207    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:52.636219    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:52.647415    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:52.647428    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:52.683490    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:52.683504    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:52.698771    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:52.698785    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:52.711471    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:52.711482    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:52.723384    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:52.723395    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:52.747927    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:52.747945    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:52.762222    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:52.762236    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:52.779610    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:52.779619    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:52.794279    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:52.794292    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:52.806423    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:52.806433    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:52.844145    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:52.844156    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:52.848847    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:52.848853    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:54.852340    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:55.369048    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:59.854422    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:59.854598    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:59.875751    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 15:59:59.875837    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:59.887848    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 15:59:59.887924    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:59.899788    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 15:59:59.899855    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:59.910262    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 15:59:59.910336    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:59.921447    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 15:59:59.921514    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:59.932074    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 15:59:59.932140    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:59.942318    4326 logs.go:276] 0 containers: []
	W0831 15:59:59.942329    4326 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:59.942383    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:59.952481    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 15:59:59.952497    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:59.952503    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:59.957647    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:59.957655    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:59.991779    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 15:59:59.991790    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:00.009559    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:00.009570    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:00.021454    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:00.021466    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:00.053908    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:00.053919    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:00.065777    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:00.065790    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:00.080178    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:00.080190    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:00.091393    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:00.091406    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:00.102962    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:00.102974    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:00.114417    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:00.114427    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:00.128359    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:00.128369    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:00.140208    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:00.140221    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:00.155107    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:00.155116    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:00.167137    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:00.167148    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:02.691460    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:00.371208    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:00.371309    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:00.386619    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 16:00:00.386693    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:00.396982    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 16:00:00.397052    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:00.407885    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 16:00:00.407951    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:00.418334    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 16:00:00.418394    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:00.428852    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 16:00:00.428921    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:00.443166    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 16:00:00.443236    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:00.453104    4510 logs.go:276] 0 containers: []
	W0831 16:00:00.453117    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:00.453170    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:00.463955    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 16:00:00.463971    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:00.463977    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:00.468543    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 16:00:00.468550    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 16:00:00.486484    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:00.486495    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:00.507681    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:00:00.507690    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:00.520150    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:00.520161    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:00.557279    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 16:00:00.557291    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 16:00:00.573577    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 16:00:00.573587    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 16:00:00.592991    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 16:00:00.593002    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 16:00:00.606713    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 16:00:00.606725    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 16:00:00.632231    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 16:00:00.632240    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 16:00:00.647757    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 16:00:00.647770    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 16:00:00.660326    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 16:00:00.660337    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 16:00:00.672085    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 16:00:00.672096    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 16:00:00.684482    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:00.684492    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:00.725611    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 16:00:00.725630    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 16:00:00.739961    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 16:00:00.739972    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 16:00:00.751371    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 16:00:00.751385    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 16:00:03.270086    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:07.693252    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:07.693366    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:07.705124    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 16:00:07.705202    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:07.719146    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 16:00:07.719214    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:07.730543    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 16:00:07.730613    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:07.740730    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 16:00:07.740804    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:07.751577    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 16:00:07.751645    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:07.762084    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 16:00:07.762156    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:07.772382    4326 logs.go:276] 0 containers: []
	W0831 16:00:07.772393    4326 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:07.772453    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:07.786559    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 16:00:07.786577    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:07.786583    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:07.822356    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:07.822367    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:07.859324    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:07.859343    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:07.877759    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:07.877770    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:07.892994    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:07.893005    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:07.904694    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:07.904709    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:07.939592    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:07.939603    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:07.953232    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:07.953242    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:07.965058    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:07.965067    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:07.989995    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:07.990005    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:08.008163    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:08.008173    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:08.019672    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:08.019681    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:08.031254    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 16:00:08.031265    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:08.048446    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:08.048459    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:08.053420    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:08.053427    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:08.272231    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:08.272370    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:08.286011    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 16:00:08.286084    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:08.297107    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 16:00:08.297191    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:08.307613    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 16:00:08.307677    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:08.322238    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 16:00:08.322302    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:08.332702    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 16:00:08.332768    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:08.343571    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 16:00:08.343631    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:08.354181    4510 logs.go:276] 0 containers: []
	W0831 16:00:08.354193    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:08.354247    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:08.365581    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 16:00:08.365599    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:08.365605    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:08.369611    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 16:00:08.369617    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 16:00:08.381061    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 16:00:08.381072    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 16:00:08.398077    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 16:00:08.398089    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 16:00:08.410398    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 16:00:08.410409    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 16:00:08.430625    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 16:00:08.430636    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 16:00:08.444109    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:00:08.444119    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:08.456020    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:08.456031    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:08.494763    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 16:00:08.494774    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 16:00:08.509197    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 16:00:08.509206    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 16:00:08.520982    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 16:00:08.520994    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 16:00:08.535763    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 16:00:08.535776    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 16:00:08.546918    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:08.546930    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:08.582333    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 16:00:08.582344    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 16:00:08.596745    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 16:00:08.596756    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 16:00:08.624091    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 16:00:08.624101    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 16:00:08.641214    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:08.641227    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:10.565750    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:11.165705    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:16.167836    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:16.167920    4510 kubeadm.go:597] duration metric: took 4m3.618059958s to restartPrimaryControlPlane
	W0831 16:00:16.167979    4510 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0831 16:00:16.168002    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0831 16:00:17.212590    4510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.04460775s)
	I0831 16:00:17.212645    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 16:00:17.217753    4510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 16:00:17.220481    4510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 16:00:17.223463    4510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 16:00:17.223470    4510 kubeadm.go:157] found existing configuration files:
	
	I0831 16:00:17.223496    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf
	I0831 16:00:17.226375    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 16:00:17.226409    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 16:00:17.229113    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf
	I0831 16:00:17.231835    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 16:00:17.231858    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 16:00:17.235108    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf
	I0831 16:00:17.237770    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 16:00:17.237793    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 16:00:17.240176    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf
	I0831 16:00:17.243091    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 16:00:17.243111    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
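
	The four grep/rm pairs above are a simple stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is deleted so the upcoming kubeadm init regenerates it. A hypothetical sketch of the same check-then-remove logic:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:50524"
		for _, f := range []string{
			"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf",
		} {
			path := "/etc/kubernetes/" + f
			// grep exits with status 2 when the file is missing (as in the
			// log above) and 1 when the endpoint is absent; either way the
			// stale file is removed and kubeadm init will rewrite it.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				exec.Command("sudo", "rm", "-f", path).Run()
				fmt.Println("cleared", path)
			}
		}
	}
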
	I0831 16:00:17.245892    4510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 16:00:17.262440    4510 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0831 16:00:17.262469    4510 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 16:00:17.314827    4510 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 16:00:17.314901    4510 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 16:00:17.314951    4510 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0831 16:00:17.364372    4510 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 16:00:17.368569    4510 out.go:235]   - Generating certificates and keys ...
	I0831 16:00:17.368606    4510 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 16:00:17.368656    4510 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 16:00:17.368700    4510 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0831 16:00:17.368731    4510 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0831 16:00:17.368774    4510 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0831 16:00:17.368808    4510 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0831 16:00:17.368849    4510 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0831 16:00:17.368883    4510 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0831 16:00:17.368921    4510 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0831 16:00:17.368958    4510 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0831 16:00:17.368978    4510 kubeadm.go:310] [certs] Using the existing "sa" key
	I0831 16:00:17.369010    4510 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 16:00:17.440132    4510 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 16:00:17.496069    4510 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 16:00:17.559948    4510 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 16:00:17.647519    4510 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 16:00:17.680302    4510 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 16:00:17.680666    4510 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 16:00:17.680783    4510 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 16:00:17.749643    4510 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 16:00:15.568172    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:15.568547    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:15.596058    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 16:00:15.596195    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:15.617376    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 16:00:15.617456    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:15.632030    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 16:00:15.632108    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:15.642934    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 16:00:15.642991    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:15.652965    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 16:00:15.653021    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:15.663681    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 16:00:15.663754    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:15.675557    4326 logs.go:276] 0 containers: []
	W0831 16:00:15.675568    4326 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:15.675631    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:15.686134    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 16:00:15.686151    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:15.686157    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:15.701299    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:15.701310    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:15.713316    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:15.713326    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:15.727642    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:15.727663    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:15.733013    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:15.733022    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:15.744695    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:15.744706    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:15.756758    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:15.756768    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:15.775841    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:15.775852    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:15.799062    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:15.799070    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:15.830718    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:15.830727    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:15.844669    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:15.844681    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:15.856369    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:15.856380    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:15.870649    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:15.870661    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:15.882483    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:15.882497    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:15.918112    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 16:00:15.918123    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:18.438178    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
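
Process 4326 here is polling the apiserver's /healthz endpoint and repeatedly timing out. The same probe can be run by hand from inside the guest; -k skips TLS verification, or point curl at the cluster CA (CA path assumed from minikube's usual layout):

	curl -k https://10.0.2.15:8443/healthz
	curl --cacert /var/lib/minikube/certs/ca.crt https://10.0.2.15:8443/healthz
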
	I0831 16:00:17.756790    4510 out.go:235]   - Booting up control plane ...
	I0831 16:00:17.756838    4510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 16:00:17.756907    4510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 16:00:17.756981    4510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 16:00:17.757026    4510 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 16:00:17.757114    4510 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0831 16:00:22.258673    4510 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503158 seconds
	I0831 16:00:22.258821    4510 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 16:00:22.262882    4510 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 16:00:22.781923    4510 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 16:00:22.782250    4510 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-231000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 16:00:23.287616    4510 kubeadm.go:310] [bootstrap-token] Using token: runj8w.lv76pt49rlz5uirc
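
The [bootstrap-token] phase mints the time-limited token that joining kubelets authenticate with. Standard kubeadm subcommands list existing tokens or mint a fresh one together with its join command:

	kubeadm token list
	kubeadm token create --print-join-command
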
	I0831 16:00:23.440200    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:23.440303    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:23.456239    4326 logs.go:276] 1 containers: [8d5cc5198109]
	I0831 16:00:23.456314    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:23.467574    4326 logs.go:276] 1 containers: [84b40242e7f4]
	I0831 16:00:23.467648    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:23.479236    4326 logs.go:276] 4 containers: [966afe209f02 f231366b3e30 ba21391de275 607c83286101]
	I0831 16:00:23.479320    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:23.491505    4326 logs.go:276] 1 containers: [51d2d012678c]
	I0831 16:00:23.491577    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:23.502430    4326 logs.go:276] 1 containers: [8e3588246863]
	I0831 16:00:23.502500    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:23.518349    4326 logs.go:276] 1 containers: [76e971d7d627]
	I0831 16:00:23.518418    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:23.530486    4326 logs.go:276] 0 containers: []
	W0831 16:00:23.530500    4326 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:23.530561    4326 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:23.541750    4326 logs.go:276] 1 containers: [38af8b85242a]
	I0831 16:00:23.541771    4326 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:23.541778    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:23.546530    4326 logs.go:123] Gathering logs for etcd [84b40242e7f4] ...
	I0831 16:00:23.546542    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84b40242e7f4"
	I0831 16:00:23.564371    4326 logs.go:123] Gathering logs for coredns [966afe209f02] ...
	I0831 16:00:23.564388    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966afe209f02"
	I0831 16:00:23.577806    4326 logs.go:123] Gathering logs for coredns [f231366b3e30] ...
	I0831 16:00:23.577817    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f231366b3e30"
	I0831 16:00:23.589898    4326 logs.go:123] Gathering logs for kube-scheduler [51d2d012678c] ...
	I0831 16:00:23.589912    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d2d012678c"
	I0831 16:00:23.607011    4326 logs.go:123] Gathering logs for kube-apiserver [8d5cc5198109] ...
	I0831 16:00:23.607023    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5cc5198109"
	I0831 16:00:23.621659    4326 logs.go:123] Gathering logs for coredns [ba21391de275] ...
	I0831 16:00:23.621672    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba21391de275"
	I0831 16:00:23.634898    4326 logs.go:123] Gathering logs for coredns [607c83286101] ...
	I0831 16:00:23.634909    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 607c83286101"
	I0831 16:00:23.646669    4326 logs.go:123] Gathering logs for kube-controller-manager [76e971d7d627] ...
	I0831 16:00:23.646679    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e971d7d627"
	I0831 16:00:23.291806    4510 out.go:235]   - Configuring RBAC rules ...
	I0831 16:00:23.291893    4510 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 16:00:23.294852    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 16:00:23.300288    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 16:00:23.301393    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 16:00:23.302397    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 16:00:23.303510    4510 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 16:00:23.308494    4510 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 16:00:23.461883    4510 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 16:00:23.697931    4510 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 16:00:23.698274    4510 kubeadm.go:310] 
	I0831 16:00:23.698312    4510 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 16:00:23.698315    4510 kubeadm.go:310] 
	I0831 16:00:23.698352    4510 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 16:00:23.698354    4510 kubeadm.go:310] 
	I0831 16:00:23.698367    4510 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 16:00:23.698398    4510 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 16:00:23.698427    4510 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 16:00:23.698430    4510 kubeadm.go:310] 
	I0831 16:00:23.698495    4510 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 16:00:23.698500    4510 kubeadm.go:310] 
	I0831 16:00:23.698529    4510 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 16:00:23.698532    4510 kubeadm.go:310] 
	I0831 16:00:23.698582    4510 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 16:00:23.698630    4510 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 16:00:23.698671    4510 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 16:00:23.698675    4510 kubeadm.go:310] 
	I0831 16:00:23.698816    4510 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 16:00:23.698864    4510 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 16:00:23.698871    4510 kubeadm.go:310] 
	I0831 16:00:23.698921    4510 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token runj8w.lv76pt49rlz5uirc \
	I0831 16:00:23.698999    4510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 \
	I0831 16:00:23.699015    4510 kubeadm.go:310] 	--control-plane 
	I0831 16:00:23.699017    4510 kubeadm.go:310] 
	I0831 16:00:23.699135    4510 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 16:00:23.699143    4510 kubeadm.go:310] 
	I0831 16:00:23.699187    4510 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token runj8w.lv76pt49rlz5uirc \
	I0831 16:00:23.699249    4510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 
	I0831 16:00:23.699444    4510 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
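
The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key, recomputable with the command the Kubernetes docs give (CA path assumed: minikube keeps its certificates under /var/lib/minikube/certs rather than kubeadm's default /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The kubelet.service warning is benign here: minikube starts the kubelet explicitly (note the sudo systemctl start kubelet run a few lines below) instead of relying on boot-time enablement.
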
	I0831 16:00:23.699600    4510 cni.go:84] Creating CNI manager for ""
	I0831 16:00:23.699611    4510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:00:23.702744    4510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 16:00:23.709885    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 16:00:23.713613    4510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
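
The 496-byte conflist scp'd above wires up the plain bridge CNI plugin that the cni.go lines recommend. The verbatim file is not in the log; a minimal bridge conflist is roughly this shape (an assumed sketch, with the subnet matching the node PodCIDR 10.244.0.0/24 reported later, not the exact bytes minikube writes):

	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
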
	I0831 16:00:23.719551    4510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 16:00:23.719663    4510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 16:00:23.719719    4510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-231000 minikube.k8s.io/updated_at=2024_08_31T16_00_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=stopped-upgrade-231000 minikube.k8s.io/primary=true
	I0831 16:00:23.733961    4510 ops.go:34] apiserver oom_adj: -16
	I0831 16:00:23.772037    4510 kubeadm.go:1113] duration metric: took 52.460208ms to wait for elevateKubeSystemPrivileges
	I0831 16:00:23.772139    4510 kubeadm.go:394] duration metric: took 4m11.237113417s to StartCluster
	I0831 16:00:23.772155    4510 settings.go:142] acquiring lock: {Name:mk1f68c46dbdc908eb5f421f0b3b97ef9fe18a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:00:23.772255    4510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:00:23.772712    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:00:23.772912    4510 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:00:23.772963    4510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 16:00:23.773028    4510 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-231000"
	I0831 16:00:23.773039    4510 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:00:23.773042    4510 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-231000"
	W0831 16:00:23.773045    4510 addons.go:243] addon storage-provisioner should already be in state true
	I0831 16:00:23.773056    4510 host.go:66] Checking if "stopped-upgrade-231000" exists ...
	I0831 16:00:23.773072    4510 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-231000"
	I0831 16:00:23.773084    4510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-231000"
	I0831 16:00:23.776853    4510 out.go:177] * Verifying Kubernetes components...
	I0831 16:00:23.777519    4510 kapi.go:59] client config for stopped-upgrade-231000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106638030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 16:00:23.781275    4510 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-231000"
	W0831 16:00:23.781281    4510 addons.go:243] addon default-storageclass should already be in state true
	I0831 16:00:23.781292    4510 host.go:66] Checking if "stopped-upgrade-231000" exists ...
	I0831 16:00:23.781879    4510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 16:00:23.781885    4510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 16:00:23.781890    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 16:00:23.784811    4510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 16:00:23.788841    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 16:00:23.792764    4510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 16:00:23.792770    4510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 16:00:23.792776    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 16:00:23.859455    4510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 16:00:23.865156    4510 api_server.go:52] waiting for apiserver process to appear ...
	I0831 16:00:23.865205    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 16:00:23.868947    4510 api_server.go:72] duration metric: took 96.025958ms to wait for apiserver process to appear ...
	I0831 16:00:23.868955    4510 api_server.go:88] waiting for apiserver healthz status ...
	I0831 16:00:23.868961    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:23.886577    4510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 16:00:23.897246    4510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 16:00:24.229746    4510 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 16:00:24.229758    4510 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
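
With the storageclass and storage-provisioner manifests applied, addon state is checkable from the host (standard minikube/kubectl commands; the kubectl context name matches the profile name):

	minikube -p stopped-upgrade-231000 addons list
	kubectl --context stopped-upgrade-231000 -n kube-system get pods
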
	I0831 16:00:23.665302    4326 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:23.665313    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:23.689144    4326 logs.go:123] Gathering logs for container status ...
	I0831 16:00:23.689159    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:23.702978    4326 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:23.702988    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:23.740596    4326 logs.go:123] Gathering logs for kube-proxy [8e3588246863] ...
	I0831 16:00:23.740606    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e3588246863"
	I0831 16:00:23.753803    4326 logs.go:123] Gathering logs for storage-provisioner [38af8b85242a] ...
	I0831 16:00:23.753815    4326 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38af8b85242a"
	I0831 16:00:23.766968    4326 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:23.766982    4326 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:26.304167    4326 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:28.870632    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:28.870703    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:31.306676    4326 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:31.311276    4326 out.go:201] 
	W0831 16:00:31.314058    4326 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0831 16:00:31.314067    4326 out.go:270] * 
	W0831 16:00:31.314784    4326 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:00:31.330059    4326 out.go:201] 
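
GUEST_START is the terminal failure for this run: the apiserver at 10.0.2.15:8443 never answered /healthz within the wait budget. The log bundle the box asks for is produced with (profile name taken from the node logs below):

	minikube logs -p running-upgrade-137000 --file=logs.txt
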
	I0831 16:00:33.870881    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:33.870925    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:38.871062    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:38.871103    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:43.871316    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:43.871340    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Sat 2024-08-31 22:51:44 UTC, ends at Sat 2024-08-31 23:00:47 UTC. --
	Aug 31 23:00:31 running-upgrade-137000 dockerd[3180]: time="2024-08-31T23:00:31.752982493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 31 23:00:31 running-upgrade-137000 dockerd[3180]: time="2024-08-31T23:00:31.753118196Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/aacc6d0cff457519323606b17447943ec2e337f31fcb9ad4d5291fb533f587ba pid=19135 runtime=io.containerd.runc.v2
	Aug 31 23:00:32 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:32Z" level=error msg="ContainerStats resp: {0x40006731c0 linux}"
	Aug 31 23:00:32 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:32Z" level=error msg="ContainerStats resp: {0x4000673580 linux}"
	Aug 31 23:00:33 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:33Z" level=error msg="ContainerStats resp: {0x40000b9c40 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x4000902040 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x400045b280 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x400035b440 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x400045a680 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x400045bbc0 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x400045bf80 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=error msg="ContainerStats resp: {0x40006eedc0 linux}"
	Aug 31 23:00:34 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 31 23:00:39 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 31 23:00:44 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:44Z" level=error msg="ContainerStats resp: {0x400007fec0 linux}"
	Aug 31 23:00:44 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:44Z" level=error msg="ContainerStats resp: {0x4000902c00 linux}"
	Aug 31 23:00:44 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 31 23:00:45 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:45Z" level=error msg="ContainerStats resp: {0x40006d2e40 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x40008d9c80 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x40008d9dc0 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x40006ee000 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x400035bdc0 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x400045a200 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x400045a780 linux}"
	Aug 31 23:00:46 running-upgrade-137000 cri-dockerd[3008]: time="2024-08-31T23:00:46Z" level=error msg="ContainerStats resp: {0x400045a940 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4d2aaebb9807e       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   d89281e96286e
	aacc6d0cff457       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   51c6b1561f521
	966afe209f022       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d89281e96286e
	f231366b3e30c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   51c6b1561f521
	38af8b85242a3       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   b435021574349
	8e3588246863c       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   8ca7e379b26b8
	84b40242e7f43       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   ea01bf78c7d45
	76e971d7d627c       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   452a0a4d0a1f4
	8d5cc51981098       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   b4adcb01d6edf
	51d2d012678ca       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   071c98d7933f6
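
This table comes from crictl, with docker ps as the fallback, matching the gathering command recorded earlier. Note the two coredns containers on attempt 2, started 16 seconds ago, while the original attempt-1 containers sit in Exited. The same view by hand:

	sudo crictl ps -a
	sudo docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'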
	
	
	==> coredns [4d2aaebb9807] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5053634251068970836.1158008877513919763. HINFO: read udp 10.244.0.3:40563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5053634251068970836.1158008877513919763. HINFO: read udp 10.244.0.3:43159->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5053634251068970836.1158008877513919763. HINFO: read udp 10.244.0.3:32895->10.0.2.3:53: i/o timeout
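
10.0.2.3 is the DNS forwarder that QEMU's user-mode (slirp) networking exposes to the guest, so these i/o timeouts point at host-side DNS reachability rather than at CoreDNS itself. A standard in-cluster check (the documented Kubernetes DNS-debugging pattern):

	kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default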
	
	
	==> coredns [966afe209f02] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:54671->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:35987->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:43978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:60757->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:39053->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:58681->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:50571->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:36615->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:44308->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2523341400524168008.7797300273854397958. HINFO: read udp 10.244.0.3:53101->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aacc6d0cff45] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8744214259924179161.8784697833983221259. HINFO: read udp 10.244.0.2:46507->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8744214259924179161.8784697833983221259. HINFO: read udp 10.244.0.2:56364->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8744214259924179161.8784697833983221259. HINFO: read udp 10.244.0.2:50338->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f231366b3e30] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:44646->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:42884->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:60285->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:40246->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:32807->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:39563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:34466->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:37730->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:58139->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7837359678842133157.470137965967507001. HINFO: read udp 10.244.0.2:54345->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-137000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-137000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=running-upgrade-137000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T15_56_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:56:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-137000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:00:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:56:30 +0000   Sat, 31 Aug 2024 22:56:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:56:30 +0000   Sat, 31 Aug 2024 22:56:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:56:30 +0000   Sat, 31 Aug 2024 22:56:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:56:30 +0000   Sat, 31 Aug 2024 22:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-137000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bedfb8f76003490b87e0941934f512c3
	  System UUID:                bedfb8f76003490b87e0941934f512c3
	  Boot ID:                    3c2b8320-aa7c-4772-bcdb-379aa6e8910a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bxrmh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-pvssw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-137000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-137000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-137000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-mscgc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-137000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-137000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-137000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-137000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-137000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-137000 event: Registered Node running-upgrade-137000 in Controller
	
	
	==> dmesg <==
	[  +1.683220] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.058965] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.062303] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.134038] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.062414] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.058065] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.346862] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Aug31 22:52] systemd-fstab-generator[1714]: Ignoring "noauto" for root device
	[  +3.105370] systemd-fstab-generator[2166]: Ignoring "noauto" for root device
	[  +0.128584] systemd-fstab-generator[2200]: Ignoring "noauto" for root device
	[  +0.080424] systemd-fstab-generator[2211]: Ignoring "noauto" for root device
	[  +0.086004] systemd-fstab-generator[2224]: Ignoring "noauto" for root device
	[  +3.132248] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.221661] systemd-fstab-generator[2963]: Ignoring "noauto" for root device
	[  +0.068072] systemd-fstab-generator[2976]: Ignoring "noauto" for root device
	[  +0.060674] systemd-fstab-generator[2987]: Ignoring "noauto" for root device
	[  +0.075708] systemd-fstab-generator[3001]: Ignoring "noauto" for root device
	[  +2.278790] systemd-fstab-generator[3160]: Ignoring "noauto" for root device
	[  +3.251727] systemd-fstab-generator[3571]: Ignoring "noauto" for root device
	[  +1.651328] systemd-fstab-generator[4090]: Ignoring "noauto" for root device
	[ +20.457274] kauditd_printk_skb: 68 callbacks suppressed
	[Aug31 22:53] kauditd_printk_skb: 21 callbacks suppressed
	[Aug31 22:56] systemd-fstab-generator[12273]: Ignoring "noauto" for root device
	[  +5.611312] systemd-fstab-generator[12860]: Ignoring "noauto" for root device
	[  +0.469118] systemd-fstab-generator[12990]: Ignoring "noauto" for root device
	
	
	==> etcd [84b40242e7f4] <==
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-31T22:56:26.166Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-31T22:56:26.295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T22:56:26.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T22:56:26.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-31T22:56:26.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:56:26.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-31T22:56:26.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-31T22:56:26.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-31T22:56:26.300Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-137000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:56:26.300Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:56:26.300Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-31T22:56:26.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:56:26.301Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:56:26.301Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:56:26.303Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:56:26.304Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:56:26.320Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:56:26.320Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:56:26.320Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 23:00:47 up 9 min,  0 users,  load average: 0.04, 0.21, 0.14
	Linux running-upgrade-137000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8d5cc5198109] <==
	I0831 22:56:27.689118       1 cache.go:39] Caches are synced for autoregister controller
	I0831 22:56:27.689270       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 22:56:27.689309       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0831 22:56:27.689351       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0831 22:56:27.689739       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0831 22:56:27.690479       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0831 22:56:27.704570       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0831 22:56:28.424041       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0831 22:56:28.592895       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0831 22:56:28.594676       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0831 22:56:28.594727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0831 22:56:28.723420       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0831 22:56:28.738477       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0831 22:56:28.755326       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0831 22:56:28.757360       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0831 22:56:28.757771       1 controller.go:611] quota admission added evaluator for: endpoints
	I0831 22:56:28.759045       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0831 22:56:29.745318       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0831 22:56:30.317085       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0831 22:56:30.322581       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0831 22:56:30.348826       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0831 22:56:30.393085       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 22:56:43.351123       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0831 22:56:43.500761       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0831 22:56:43.866465       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [76e971d7d627] <==
	I0831 22:56:42.675084       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0831 22:56:42.690471       1 shared_informer.go:262] Caches are synced for PVC protection
	I0831 22:56:42.695622       1 shared_informer.go:262] Caches are synced for stateful set
	I0831 22:56:42.695632       1 shared_informer.go:262] Caches are synced for deployment
	I0831 22:56:42.698609       1 shared_informer.go:262] Caches are synced for PV protection
	I0831 22:56:42.699754       1 shared_informer.go:262] Caches are synced for attach detach
	I0831 22:56:42.745661       1 shared_informer.go:262] Caches are synced for expand
	I0831 22:56:42.745806       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0831 22:56:42.766274       1 shared_informer.go:262] Caches are synced for taint
	I0831 22:56:42.766307       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0831 22:56:42.766325       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-137000. Assuming now as a timestamp.
	I0831 22:56:42.766341       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0831 22:56:42.766361       1 event.go:294] "Event occurred" object="running-upgrade-137000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-137000 event: Registered Node running-upgrade-137000 in Controller"
	I0831 22:56:42.766369       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0831 22:56:42.783117       1 shared_informer.go:262] Caches are synced for resource quota
	I0831 22:56:42.788355       1 shared_informer.go:262] Caches are synced for disruption
	I0831 22:56:42.788378       1 disruption.go:371] Sending events to api server.
	I0831 22:56:42.800360       1 shared_informer.go:262] Caches are synced for resource quota
	I0831 22:56:43.213422       1 shared_informer.go:262] Caches are synced for garbage collector
	I0831 22:56:43.245185       1 shared_informer.go:262] Caches are synced for garbage collector
	I0831 22:56:43.245217       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0831 22:56:43.354910       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mscgc"
	I0831 22:56:43.501928       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0831 22:56:43.600900       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bxrmh"
	I0831 22:56:43.605414       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pvssw"
	
	
	==> kube-proxy [8e3588246863] <==
	I0831 22:56:43.843152       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0831 22:56:43.843210       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0831 22:56:43.843220       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0831 22:56:43.861859       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0831 22:56:43.861872       1 server_others.go:206] "Using iptables Proxier"
	I0831 22:56:43.861886       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0831 22:56:43.861995       1 server.go:661] "Version info" version="v1.24.1"
	I0831 22:56:43.862004       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:56:43.862607       1 config.go:317] "Starting service config controller"
	I0831 22:56:43.862613       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0831 22:56:43.862623       1 config.go:226] "Starting endpoint slice config controller"
	I0831 22:56:43.862625       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0831 22:56:43.862796       1 config.go:444] "Starting node config controller"
	I0831 22:56:43.862798       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0831 22:56:43.962849       1 shared_informer.go:262] Caches are synced for node config
	I0831 22:56:43.962853       1 shared_informer.go:262] Caches are synced for service config
	I0831 22:56:43.962861       1 shared_informer.go:262] Caches are synced for endpoint slice config
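
With no proxy mode configured, kube-proxy fell back to the iptables proxier, so service routing lives in the nat table on the node. The programmed chains can be inspected with, e.g.:

	sudo iptables -t nat -L KUBE-SERVICES -n | head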
	
	
	==> kube-scheduler [51d2d012678c] <==
	W0831 22:56:27.658951       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:56:27.658957       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0831 22:56:27.658980       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:56:27.658987       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0831 22:56:27.658999       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:56:27.659002       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0831 22:56:27.659448       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:56:27.659486       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 22:56:27.659617       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:56:27.659658       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0831 22:56:27.659698       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:56:27.659758       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0831 22:56:27.659793       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:56:27.659827       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0831 22:56:28.568268       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:56:28.568328       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0831 22:56:28.602758       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:56:28.602807       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0831 22:56:28.612489       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:56:28.612503       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0831 22:56:28.619052       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:56:28.619110       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0831 22:56:28.654351       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:56:28.654451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0831 22:56:30.457846       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sat 2024-08-31 22:51:44 UTC, ends at Sat 2024-08-31 23:00:47 UTC. --
	Aug 31 22:56:32 running-upgrade-137000 kubelet[12866]: E0831 22:56:32.143194   12866 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-137000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-137000"
	Aug 31 22:56:32 running-upgrade-137000 kubelet[12866]: E0831 22:56:32.343204   12866 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-137000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-137000"
	Aug 31 22:56:32 running-upgrade-137000 kubelet[12866]: I0831 22:56:32.544317   12866 request.go:601] Waited for 1.122195182s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 31 22:56:32 running-upgrade-137000 kubelet[12866]: E0831 22:56:32.547670   12866 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-137000\" already exists" pod="kube-system/etcd-running-upgrade-137000"
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: I0831 22:56:42.594622   12866 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: I0831 22:56:42.595356   12866 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: I0831 22:56:42.771231   12866 topology_manager.go:200] "Topology Admit Handler"
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: I0831 22:56:42.842859   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4bdeb8c5-6ab5-4d10-84ff-9f365f33496d-tmp\") pod \"storage-provisioner\" (UID: \"4bdeb8c5-6ab5-4d10-84ff-9f365f33496d\") " pod="kube-system/storage-provisioner"
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: I0831 22:56:42.842901   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8dz\" (UniqueName: \"kubernetes.io/projected/4bdeb8c5-6ab5-4d10-84ff-9f365f33496d-kube-api-access-fz8dz\") pod \"storage-provisioner\" (UID: \"4bdeb8c5-6ab5-4d10-84ff-9f365f33496d\") " pod="kube-system/storage-provisioner"
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: E0831 22:56:42.946559   12866 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: E0831 22:56:42.946599   12866 projected.go:192] Error preparing data for projected volume kube-api-access-fz8dz for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 31 22:56:42 running-upgrade-137000 kubelet[12866]: E0831 22:56:42.946635   12866 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/4bdeb8c5-6ab5-4d10-84ff-9f365f33496d-kube-api-access-fz8dz podName:4bdeb8c5-6ab5-4d10-84ff-9f365f33496d nodeName:}" failed. No retries permitted until 2024-08-31 22:56:43.446621913 +0000 UTC m=+13.140285354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fz8dz" (UniqueName: "kubernetes.io/projected/4bdeb8c5-6ab5-4d10-84ff-9f365f33496d-kube-api-access-fz8dz") pod "storage-provisioner" (UID: "4bdeb8c5-6ab5-4d10-84ff-9f365f33496d") : configmap "kube-root-ca.crt" not found
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.357516   12866 topology_manager.go:200] "Topology Admit Handler"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.452308   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a963f97-0aa5-4de1-b059-ded4a972a86b-lib-modules\") pod \"kube-proxy-mscgc\" (UID: \"3a963f97-0aa5-4de1-b059-ded4a972a86b\") " pod="kube-system/kube-proxy-mscgc"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.452430   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67pn\" (UniqueName: \"kubernetes.io/projected/3a963f97-0aa5-4de1-b059-ded4a972a86b-kube-api-access-f67pn\") pod \"kube-proxy-mscgc\" (UID: \"3a963f97-0aa5-4de1-b059-ded4a972a86b\") " pod="kube-system/kube-proxy-mscgc"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.452461   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a963f97-0aa5-4de1-b059-ded4a972a86b-kube-proxy\") pod \"kube-proxy-mscgc\" (UID: \"3a963f97-0aa5-4de1-b059-ded4a972a86b\") " pod="kube-system/kube-proxy-mscgc"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.452514   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a963f97-0aa5-4de1-b059-ded4a972a86b-xtables-lock\") pod \"kube-proxy-mscgc\" (UID: \"3a963f97-0aa5-4de1-b059-ded4a972a86b\") " pod="kube-system/kube-proxy-mscgc"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.603608   12866 topology_manager.go:200] "Topology Admit Handler"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.609211   12866 topology_manager.go:200] "Topology Admit Handler"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.754933   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e6fd275-a989-41e2-a722-b76266363551-config-volume\") pod \"coredns-6d4b75cb6d-pvssw\" (UID: \"7e6fd275-a989-41e2-a722-b76266363551\") " pod="kube-system/coredns-6d4b75cb6d-pvssw"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.755239   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzsw7\" (UniqueName: \"kubernetes.io/projected/7e6fd275-a989-41e2-a722-b76266363551-kube-api-access-zzsw7\") pod \"coredns-6d4b75cb6d-pvssw\" (UID: \"7e6fd275-a989-41e2-a722-b76266363551\") " pod="kube-system/coredns-6d4b75cb6d-pvssw"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.755256   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96t2\" (UniqueName: \"kubernetes.io/projected/abd32402-73be-495c-8f03-51b22fc53eda-kube-api-access-m96t2\") pod \"coredns-6d4b75cb6d-bxrmh\" (UID: \"abd32402-73be-495c-8f03-51b22fc53eda\") " pod="kube-system/coredns-6d4b75cb6d-bxrmh"
	Aug 31 22:56:43 running-upgrade-137000 kubelet[12866]: I0831 22:56:43.755267   12866 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abd32402-73be-495c-8f03-51b22fc53eda-config-volume\") pod \"coredns-6d4b75cb6d-bxrmh\" (UID: \"abd32402-73be-495c-8f03-51b22fc53eda\") " pod="kube-system/coredns-6d4b75cb6d-bxrmh"
	Aug 31 23:00:31 running-upgrade-137000 kubelet[12866]: I0831 23:00:31.972878   12866 scope.go:110] "RemoveContainer" containerID="607c83286101dd255239fc42ed7c01b52d604522be060ca52153d6610fdf511b"
	Aug 31 23:00:31 running-upgrade-137000 kubelet[12866]: I0831 23:00:31.986940   12866 scope.go:110] "RemoveContainer" containerID="ba21391de275d9f6b8148d416d991432d46295c9fe1e36e76d337945aa5d4b79"
	
	
	==> storage-provisioner [38af8b85242a] <==
	I0831 22:56:43.886462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:56:43.890765       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:56:43.890787       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:56:43.894384       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:56:43.894529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-137000_614228d2-222b-4a8a-8254-9f67162c27c5!
	I0831 22:56:43.895112       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b037e2f9-9a7e-4d7a-9c32-8dbc06283465", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-137000_614228d2-222b-4a8a-8254-9f67162c27c5 became leader
	I0831 22:56:43.995014       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-137000_614228d2-222b-4a8a-8254-9f67162c27c5!
	

-- /stdout --
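[Editor's note: the dump above is the guest-side log bundle that helpers_test.go collects on failure; the "==> component [container-id] <==" sections match the output format of `minikube logs`. It shows the control plane inside the VM still healthy through 23:00:47, so the failure below comes from the host-side status probe. A sketch of how one might re-collect the same dump manually, assuming the profile name from this run:

    $ out/minikube-darwin-arm64 -p running-upgrade-137000 logs
]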
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-137000 -n running-upgrade-137000
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-137000 -n running-upgrade-137000: exit status 2 (15.568363208s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "running-upgrade-137000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "running-upgrade-137000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-137000
--- FAIL: TestRunningBinaryUpgrade (585.91s)
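[Editor's note: `minikube status` reports component state through its exit code as well as its output, which is why helpers_test.go treats the non-zero exit above as "may be ok" rather than aborting at that step. A minimal sketch of the same probe, reusing the binary and profile name from this run; the exit code of 2 alongside "Stopped" matches what the test observed:

    $ out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-137000
    Stopped
    $ echo $?
    2
]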

TestKubernetesUpgrade (18.63s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.808861166s)

-- stdout --
	* [kubernetes-upgrade-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-584000" primary control-plane node in "kubernetes-upgrade-584000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-584000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:54:17.707972    4420 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:54:17.708102    4420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:54:17.708105    4420 out.go:358] Setting ErrFile to fd 2...
	I0831 15:54:17.708107    4420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:54:17.708235    4420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:54:17.709337    4420 out.go:352] Setting JSON to false
	I0831 15:54:17.725593    4420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3221,"bootTime":1725141636,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:54:17.725668    4420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:54:17.731716    4420 out.go:177] * [kubernetes-upgrade-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:54:17.739628    4420 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:54:17.739688    4420 notify.go:220] Checking for updates...
	I0831 15:54:17.745608    4420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:54:17.748602    4420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:54:17.752657    4420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:54:17.755574    4420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:54:17.758666    4420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:54:17.761942    4420 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:54:17.762004    4420 config.go:182] Loaded profile config "running-upgrade-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:54:17.762047    4420 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:54:17.765562    4420 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 15:54:17.772609    4420 start.go:297] selected driver: qemu2
	I0831 15:54:17.772614    4420 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:54:17.772620    4420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:54:17.774725    4420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:54:17.775869    4420 out.go:177] * Automatically selected the socket_vmnet network
	I0831 15:54:17.778669    4420 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 15:54:17.778682    4420 cni.go:84] Creating CNI manager for ""
	I0831 15:54:17.778689    4420 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 15:54:17.778727    4420 start.go:340] cluster config:
	{Name:kubernetes-upgrade-584000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:54:17.782109    4420 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:54:17.790559    4420 out.go:177] * Starting "kubernetes-upgrade-584000" primary control-plane node in "kubernetes-upgrade-584000" cluster
	I0831 15:54:17.794596    4420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:54:17.794618    4420 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 15:54:17.794627    4420 cache.go:56] Caching tarball of preloaded images
	I0831 15:54:17.794684    4420 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:54:17.794689    4420 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 15:54:17.794738    4420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kubernetes-upgrade-584000/config.json ...
	I0831 15:54:17.794748    4420 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kubernetes-upgrade-584000/config.json: {Name:mk3ba3532980fddfc6660c17bc4601f62eb06b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:54:17.795116    4420 start.go:360] acquireMachinesLock for kubernetes-upgrade-584000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:54:17.795151    4420 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "kubernetes-upgrade-584000"
	I0831 15:54:17.795160    4420 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:54:17.795207    4420 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:54:17.803618    4420 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 15:54:17.820064    4420 start.go:159] libmachine.API.Create for "kubernetes-upgrade-584000" (driver="qemu2")
	I0831 15:54:17.820088    4420 client.go:168] LocalClient.Create starting
	I0831 15:54:17.820148    4420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:54:17.820178    4420 main.go:141] libmachine: Decoding PEM data...
	I0831 15:54:17.820188    4420 main.go:141] libmachine: Parsing certificate...
	I0831 15:54:17.820223    4420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:54:17.820245    4420 main.go:141] libmachine: Decoding PEM data...
	I0831 15:54:17.820252    4420 main.go:141] libmachine: Parsing certificate...
	I0831 15:54:17.820754    4420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:54:17.982298    4420 main.go:141] libmachine: Creating SSH key...
	I0831 15:54:18.011988    4420 main.go:141] libmachine: Creating Disk image...
	I0831 15:54:18.011993    4420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:54:18.012181    4420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:18.021531    4420 main.go:141] libmachine: STDOUT: 
	I0831 15:54:18.021558    4420 main.go:141] libmachine: STDERR: 
	I0831 15:54:18.021626    4420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2 +20000M
	I0831 15:54:18.029662    4420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:54:18.029677    4420 main.go:141] libmachine: STDERR: 
	I0831 15:54:18.029698    4420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:18.029702    4420 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:54:18.029714    4420 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:54:18.029740    4420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4a:db:70:08:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:18.031308    4420 main.go:141] libmachine: STDOUT: 
	I0831 15:54:18.031329    4420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:54:18.031346    4420 client.go:171] duration metric: took 211.277334ms to LocalClient.Create
	I0831 15:54:20.033317    4420 start.go:128] duration metric: took 2.238327416s to createHost
	I0831 15:54:20.033406    4420 start.go:83] releasing machines lock for "kubernetes-upgrade-584000", held for 2.238485542s
	W0831 15:54:20.033454    4420 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:54:20.046275    4420 out.go:177] * Deleting "kubernetes-upgrade-584000" in qemu2 ...
	W0831 15:54:20.075946    4420 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:54:20.075970    4420 start.go:729] Will try again in 5 seconds ...
	I0831 15:54:25.076861    4420 start.go:360] acquireMachinesLock for kubernetes-upgrade-584000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:54:25.077338    4420 start.go:364] duration metric: took 379.125µs to acquireMachinesLock for "kubernetes-upgrade-584000"
	I0831 15:54:25.077398    4420 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 15:54:25.077577    4420 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 15:54:25.087086    4420 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 15:54:25.129067    4420 start.go:159] libmachine.API.Create for "kubernetes-upgrade-584000" (driver="qemu2")
	I0831 15:54:25.129118    4420 client.go:168] LocalClient.Create starting
	I0831 15:54:25.129226    4420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 15:54:25.129295    4420 main.go:141] libmachine: Decoding PEM data...
	I0831 15:54:25.129311    4420 main.go:141] libmachine: Parsing certificate...
	I0831 15:54:25.129376    4420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 15:54:25.129415    4420 main.go:141] libmachine: Decoding PEM data...
	I0831 15:54:25.129427    4420 main.go:141] libmachine: Parsing certificate...
	I0831 15:54:25.129925    4420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 15:54:25.297292    4420 main.go:141] libmachine: Creating SSH key...
	I0831 15:54:25.423252    4420 main.go:141] libmachine: Creating Disk image...
	I0831 15:54:25.423261    4420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 15:54:25.423468    4420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:25.432789    4420 main.go:141] libmachine: STDOUT: 
	I0831 15:54:25.432808    4420 main.go:141] libmachine: STDERR: 
	I0831 15:54:25.432852    4420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2 +20000M
	I0831 15:54:25.441041    4420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 15:54:25.441059    4420 main.go:141] libmachine: STDERR: 
	I0831 15:54:25.441069    4420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:25.441076    4420 main.go:141] libmachine: Starting QEMU VM...
	I0831 15:54:25.441087    4420 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:54:25.441115    4420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:9c:18:50:af:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:25.442886    4420 main.go:141] libmachine: STDOUT: 
	I0831 15:54:25.442906    4420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:54:25.442919    4420 client.go:171] duration metric: took 313.821459ms to LocalClient.Create
	I0831 15:54:27.444962    4420 start.go:128] duration metric: took 2.367536417s to createHost
	I0831 15:54:27.445065    4420 start.go:83] releasing machines lock for "kubernetes-upgrade-584000", held for 2.367890583s
	W0831 15:54:27.445424    4420 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-584000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-584000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:54:27.455069    4420 out.go:201] 
	W0831 15:54:27.465191    4420 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:54:27.465236    4420 out.go:270] * 
	* 
	W0831 15:54:27.468262    4420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:54:27.474928    4420 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-584000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-584000: (3.401256584s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-584000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-584000 status --format={{.Host}}: exit status 7 (64.380042ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.183574917s)

-- stdout --
	* [kubernetes-upgrade-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-584000" primary control-plane node in "kubernetes-upgrade-584000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-584000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-584000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 15:54:30.986810    4454 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:54:30.986940    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:54:30.986943    4454 out.go:358] Setting ErrFile to fd 2...
	I0831 15:54:30.986946    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:54:30.987069    4454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:54:30.988220    4454 out.go:352] Setting JSON to false
	I0831 15:54:31.005181    4454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3234,"bootTime":1725141636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:54:31.005256    4454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:54:31.009993    4454 out.go:177] * [kubernetes-upgrade-584000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:54:31.017848    4454 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:54:31.017910    4454 notify.go:220] Checking for updates...
	I0831 15:54:31.025763    4454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:54:31.029804    4454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:54:31.032811    4454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:54:31.035723    4454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:54:31.038761    4454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:54:31.042030    4454 config.go:182] Loaded profile config "kubernetes-upgrade-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0831 15:54:31.042302    4454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:54:31.046784    4454 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:54:31.053787    4454 start.go:297] selected driver: qemu2
	I0831 15:54:31.053792    4454 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:54:31.053839    4454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:54:31.056159    4454 cni.go:84] Creating CNI manager for ""
	I0831 15:54:31.056236    4454 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:54:31.056279    4454 start.go:340] cluster config:
	{Name:kubernetes-upgrade-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:54:31.059700    4454 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:54:31.067777    4454 out.go:177] * Starting "kubernetes-upgrade-584000" primary control-plane node in "kubernetes-upgrade-584000" cluster
	I0831 15:54:31.070686    4454 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:54:31.070699    4454 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:54:31.070706    4454 cache.go:56] Caching tarball of preloaded images
	I0831 15:54:31.070761    4454 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:54:31.070766    4454 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 15:54:31.070812    4454 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kubernetes-upgrade-584000/config.json ...
	I0831 15:54:31.071304    4454 start.go:360] acquireMachinesLock for kubernetes-upgrade-584000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:54:31.071331    4454 start.go:364] duration metric: took 21.583µs to acquireMachinesLock for "kubernetes-upgrade-584000"
	I0831 15:54:31.071339    4454 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:54:31.071344    4454 fix.go:54] fixHost starting: 
	I0831 15:54:31.071452    4454 fix.go:112] recreateIfNeeded on kubernetes-upgrade-584000: state=Stopped err=<nil>
	W0831 15:54:31.071460    4454 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:54:31.075819    4454 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-584000" ...
	I0831 15:54:31.078736    4454 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:54:31.078769    4454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:9c:18:50:af:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:31.080534    4454 main.go:141] libmachine: STDOUT: 
	I0831 15:54:31.080564    4454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:54:31.080590    4454 fix.go:56] duration metric: took 9.246667ms for fixHost
	I0831 15:54:31.080593    4454 start.go:83] releasing machines lock for "kubernetes-upgrade-584000", held for 9.258542ms
	W0831 15:54:31.080599    4454 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:54:31.080633    4454 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:54:31.080637    4454 start.go:729] Will try again in 5 seconds ...
	I0831 15:54:36.082651    4454 start.go:360] acquireMachinesLock for kubernetes-upgrade-584000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:54:36.083190    4454 start.go:364] duration metric: took 433.542µs to acquireMachinesLock for "kubernetes-upgrade-584000"
	I0831 15:54:36.083273    4454 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:54:36.083290    4454 fix.go:54] fixHost starting: 
	I0831 15:54:36.083910    4454 fix.go:112] recreateIfNeeded on kubernetes-upgrade-584000: state=Stopped err=<nil>
	W0831 15:54:36.083931    4454 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:54:36.093695    4454 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-584000" ...
	I0831 15:54:36.097704    4454 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:54:36.097927    4454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:9c:18:50:af:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubernetes-upgrade-584000/disk.qcow2
	I0831 15:54:36.105139    4454 main.go:141] libmachine: STDOUT: 
	I0831 15:54:36.105190    4454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 15:54:36.105268    4454 fix.go:56] duration metric: took 21.981375ms for fixHost
	I0831 15:54:36.105282    4454 start.go:83] releasing machines lock for "kubernetes-upgrade-584000", held for 22.072833ms
	W0831 15:54:36.105475    4454 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-584000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-584000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 15:54:36.113705    4454 out.go:201] 
	W0831 15:54:36.116770    4454 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 15:54:36.116785    4454 out.go:270] * 
	* 
	W0831 15:54:36.118512    4454 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:54:36.127639    4454 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-584000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-584000 version --output=json: exit status 1 (57.657041ms)

** stderr ** 
	error: context "kubernetes-upgrade-584000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-31 15:54:36.198802 -0700 PDT m=+2983.164574293
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-584000 -n kubernetes-upgrade-584000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-584000 -n kubernetes-upgrade-584000: exit status 7 (31.48975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "kubernetes-upgrade-584000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-584000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-584000
--- FAIL: TestKubernetesUpgrade (18.63s)
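
Note: every start attempt in this test fails at the same host-side precondition. QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon serving /var/run/socket_vmnet; "Connection refused" means nothing is listening on that socket, so the retry five seconds later and the suggested "minikube delete" cannot help. A minimal host-side triage sketch (paths are taken from the log above; the manual start command assumes a default socket_vmnet install and is not confirmed by this report):

	# is anything serving the socket?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# if not, one way to start the daemon by hand (assumed default install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet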

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.04s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18943
- KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1973475332/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.04s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18943
- KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2948336831/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)
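
Note: both TestHyperkitDriverSkipUpgrade subtests fail identically: the hyperkit driver is Intel-only, and this agent is darwin/arm64, so minikube exits with code 56 (DRV_UNSUPPORTED_OS) before any upgrade logic runs. A hypothetical pre-flight guard for CI scripts on Apple Silicon agents (illustrative only, not part of the test suite):

	# skip hyperkit-only tests on arm64 Macs
	if [ "$(uname -m)" = "arm64" ]; then
		echo "SKIP: the hyperkit driver is not supported on darwin/arm64"
		exit 0
	fi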

TestStoppedBinaryUpgrade/Upgrade (591.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.123482109 start -p stopped-upgrade-231000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.123482109 start -p stopped-upgrade-231000 --memory=2200 --vm-driver=qemu2 : (55.170886333s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.123482109 -p stopped-upgrade-231000 stop
E0831 15:55:35.479056    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.123482109 -p stopped-upgrade-231000 stop: (12.121900542s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0831 15:57:07.190804    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:57:32.384797    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 16:00:10.274701    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.844214167s)

-- stdout --
	* [stopped-upgrade-231000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-231000" primary control-plane node in "stopped-upgrade-231000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-231000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0831 15:55:44.556037    4510 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:55:44.556186    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:55:44.556192    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 15:55:44.556195    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:55:44.556346    4510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:55:44.557645    4510 out.go:352] Setting JSON to false
	I0831 15:55:44.577313    4510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3308,"bootTime":1725141636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:55:44.577384    4510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:55:44.582566    4510 out.go:177] * [stopped-upgrade-231000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:55:44.589458    4510 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:55:44.589502    4510 notify.go:220] Checking for updates...
	I0831 15:55:44.597483    4510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:55:44.600448    4510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:55:44.604506    4510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:55:44.607451    4510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:55:44.610603    4510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:55:44.613709    4510 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:55:44.617416    4510 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0831 15:55:44.620498    4510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:55:44.624418    4510 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:55:44.631465    4510 start.go:297] selected driver: qemu2
	I0831 15:55:44.631472    4510 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:55:44.631531    4510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:55:44.634085    4510 cni.go:84] Creating CNI manager for ""
	I0831 15:55:44.634102    4510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:55:44.634129    4510 start.go:340] cluster config:
	{Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:55:44.634183    4510 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:55:44.641411    4510 out.go:177] * Starting "stopped-upgrade-231000" primary control-plane node in "stopped-upgrade-231000" cluster
	I0831 15:55:44.645442    4510 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0831 15:55:44.645467    4510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0831 15:55:44.645477    4510 cache.go:56] Caching tarball of preloaded images
	I0831 15:55:44.645538    4510 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 15:55:44.645543    4510 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0831 15:55:44.645596    4510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/config.json ...
	I0831 15:55:44.646116    4510 start.go:360] acquireMachinesLock for stopped-upgrade-231000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 15:55:44.646153    4510 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "stopped-upgrade-231000"
	I0831 15:55:44.646161    4510 start.go:96] Skipping create...Using existing machine configuration
	I0831 15:55:44.646165    4510 fix.go:54] fixHost starting: 
	I0831 15:55:44.646274    4510 fix.go:112] recreateIfNeeded on stopped-upgrade-231000: state=Stopped err=<nil>
	W0831 15:55:44.646282    4510 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 15:55:44.653468    4510 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-231000" ...
	I0831 15:55:44.656441    4510 qemu.go:418] Using hvf for hardware acceleration
	I0831 15:55:44.656519    4510 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50489-:22,hostfwd=tcp::50490-:2376,hostname=stopped-upgrade-231000 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/disk.qcow2
	I0831 15:55:44.701578    4510 main.go:141] libmachine: STDOUT: 
	I0831 15:55:44.701606    4510 main.go:141] libmachine: STDERR: 
	I0831 15:55:44.701611    4510 main.go:141] libmachine: Waiting for VM to start (ssh -p 50489 docker@127.0.0.1)...
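
Note: unlike kubernetes-upgrade-584000 above, this VM boots. The profile was created by minikube v1.26.0, whose QEMU invocation (two lines up) uses user-mode networking with host port forwards instead of socket_vmnet_client, so the dead /var/run/socket_vmnet socket is never touched. The relevant flags, copied from the two invocations:

	# failing profile: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3
	# working profile: qemu-system-aarch64 ... -nic user,model=virtio,hostfwd=tcp::50489-:22,hostfwd=tcp::50490-:2376,...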
	I0831 15:56:04.612310    4510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/config.json ...
	I0831 15:56:04.612998    4510 machine.go:93] provisionDockerMachine start ...
	I0831 15:56:04.613156    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:04.613636    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:04.613650    4510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 15:56:04.703744    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0831 15:56:04.703774    4510 buildroot.go:166] provisioning hostname "stopped-upgrade-231000"
	I0831 15:56:04.703911    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:04.704142    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:04.704166    4510 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-231000 && echo "stopped-upgrade-231000" | sudo tee /etc/hostname
	I0831 15:56:04.788902    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-231000
	
	I0831 15:56:04.789000    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:04.789202    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:04.789214    4510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-231000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-231000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-231000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 15:56:04.865209    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 15:56:04.865225    4510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18943-972/.minikube CaCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18943-972/.minikube}
	I0831 15:56:04.865235    4510 buildroot.go:174] setting up certificates
	I0831 15:56:04.865240    4510 provision.go:84] configureAuth start
	I0831 15:56:04.865245    4510 provision.go:143] copyHostCerts
	I0831 15:56:04.865357    4510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem, removing ...
	I0831 15:56:04.865369    4510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem
	I0831 15:56:04.865523    4510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/ca.pem (1082 bytes)
	I0831 15:56:04.865776    4510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem, removing ...
	I0831 15:56:04.865781    4510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem
	I0831 15:56:04.865913    4510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/cert.pem (1123 bytes)
	I0831 15:56:04.866068    4510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem, removing ...
	I0831 15:56:04.866072    4510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem
	I0831 15:56:04.866138    4510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18943-972/.minikube/key.pem (1679 bytes)
	I0831 15:56:04.866247    4510 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-231000 san=[127.0.0.1 localhost minikube stopped-upgrade-231000]
	I0831 15:56:05.018380    4510 provision.go:177] copyRemoteCerts
	I0831 15:56:05.018430    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 15:56:05.018439    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 15:56:05.055333    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 15:56:05.062660    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 15:56:05.069371    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0831 15:56:05.075910    4510 provision.go:87] duration metric: took 210.668625ms to configureAuth
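
Note: configureAuth above refreshes the host-side CA/client certificates and issues a server certificate whose SANs cover the forwarded endpoint (127.0.0.1, localhost, minikube, stopped-upgrade-231000). One way to double-check what was issued, assuming openssl is on the host PATH (the pem path is from the log):

	openssl x509 -in /Users/jenkins/minikube-integration/18943-972/.minikube/machines/server.pem \
		-noout -text | grep -A1 'Subject Alternative Name'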
	I0831 15:56:05.075921    4510 buildroot.go:189] setting minikube options for container-runtime
	I0831 15:56:05.076036    4510 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 15:56:05.076073    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.076193    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.076200    4510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 15:56:05.142893    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0831 15:56:05.142904    4510 buildroot.go:70] root file system type: tmpfs
	I0831 15:56:05.142964    4510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 15:56:05.143018    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.143134    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.143169    4510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 15:56:05.213813    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0831 15:56:05.213864    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.213982    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.213990    4510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 15:56:05.546964    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0831 15:56:05.546977    4510 machine.go:96] duration metric: took 933.996917ms to provisionDockerMachine
	I0831 15:56:05.546985    4510 start.go:293] postStartSetup for "stopped-upgrade-231000" (driver="qemu2")
	I0831 15:56:05.546992    4510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 15:56:05.547046    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 15:56:05.547055    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 15:56:05.583737    4510 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 15:56:05.584998    4510 info.go:137] Remote host: Buildroot 2021.02.12
	I0831 15:56:05.585005    4510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/addons for local assets ...
	I0831 15:56:05.585096    4510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18943-972/.minikube/files for local assets ...
	I0831 15:56:05.585217    4510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem -> 14322.pem in /etc/ssl/certs
	I0831 15:56:05.585343    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 15:56:05.587914    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:56:05.595189    4510 start.go:296] duration metric: took 48.199417ms for postStartSetup
	I0831 15:56:05.595212    4510 fix.go:56] duration metric: took 20.949661041s for fixHost
	I0831 15:56:05.595258    4510 main.go:141] libmachine: Using SSH client type: native
	I0831 15:56:05.595377    4510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507c5a0] 0x10507ee00 <nil>  [] 0s} localhost 50489 <nil> <nil>}
	I0831 15:56:05.595383    4510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0831 15:56:05.664268    4510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725144965.190477254
	
	I0831 15:56:05.664280    4510 fix.go:216] guest clock: 1725144965.190477254
	I0831 15:56:05.664284    4510 fix.go:229] Guest: 2024-08-31 15:56:05.190477254 -0700 PDT Remote: 2024-08-31 15:56:05.595214 -0700 PDT m=+21.069833043 (delta=-404.736746ms)
	I0831 15:56:05.664301    4510 fix.go:200] guest clock delta is within tolerance: -404.736746ms
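
Note: the guest-clock check runs `date +%s.%N` inside the VM over SSH and compares the result with the host wall clock; here the guest is about 405 ms behind, within minikube's tolerance, so no resync is attempted. A rough manual re-creation (port and user are from the log; python3 stands in on the host side because BSD date has no %N):

	guest=$(ssh -p 50489 docker@localhost date +%s.%N)
	host=$(python3 -c 'import time; print(time.time())')
	echo "guest=$guest host=$host"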
	I0831 15:56:05.664304    4510 start.go:83] releasing machines lock for "stopped-upgrade-231000", held for 21.018762833s
	I0831 15:56:05.664388    4510 ssh_runner.go:195] Run: cat /version.json
	I0831 15:56:05.664400    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 15:56:05.664389    4510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 15:56:05.664439    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	W0831 15:56:05.665233    4510 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50489: connect: connection refused
	I0831 15:56:05.665258    4510 retry.go:31] will retry after 307.677355ms: dial tcp [::1]:50489: connect: connection refused
	W0831 15:56:05.700320    4510 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0831 15:56:05.700378    4510 ssh_runner.go:195] Run: systemctl --version
	I0831 15:56:05.702203    4510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0831 15:56:05.703819    4510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 15:56:05.703840    4510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0831 15:56:05.707157    4510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0831 15:56:05.711759    4510 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0831 15:56:05.711767    4510 start.go:495] detecting cgroup driver to use...
	I0831 15:56:05.711846    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:56:05.718722    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0831 15:56:05.721575    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 15:56:05.724232    4510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 15:56:05.724255    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 15:56:05.727457    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:56:05.730523    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 15:56:05.733174    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 15:56:05.736116    4510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 15:56:05.739678    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 15:56:05.742899    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 15:56:05.745813    4510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 15:56:05.748640    4510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 15:56:05.751652    4510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 15:56:05.754467    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:05.812683    4510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 15:56:05.819557    4510 start.go:495] detecting cgroup driver to use...
	I0831 15:56:05.819628    4510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 15:56:05.824872    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:56:05.829790    4510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 15:56:05.836920    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 15:56:05.841214    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:56:05.846012    4510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0831 15:56:05.903564    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 15:56:05.909530    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 15:56:05.915275    4510 ssh_runner.go:195] Run: which cri-dockerd
	I0831 15:56:05.916679    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 15:56:05.919367    4510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0831 15:56:05.924345    4510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 15:56:05.985249    4510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 15:56:06.045953    4510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 15:56:06.046013    4510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 15:56:06.051232    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:06.113402    4510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:56:07.243159    4510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1297735s)
	I0831 15:56:07.243216    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 15:56:07.248028    4510 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0831 15:56:07.254914    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:56:07.260918    4510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 15:56:07.323474    4510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 15:56:07.387804    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:07.453076    4510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 15:56:07.459310    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 15:56:07.464157    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:07.525664    4510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 15:56:07.565281    4510 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 15:56:07.565373    4510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 15:56:07.567544    4510 start.go:563] Will wait 60s for crictl version
	I0831 15:56:07.567598    4510 ssh_runner.go:195] Run: which crictl
	I0831 15:56:07.568889    4510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 15:56:07.583334    4510 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0831 15:56:07.583399    4510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:56:07.601707    4510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 15:56:07.622316    4510 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0831 15:56:07.622383    4510 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0831 15:56:07.623709    4510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 15:56:07.627849    4510 kubeadm.go:883] updating cluster {Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0831 15:56:07.627899    4510 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0831 15:56:07.627938    4510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:56:07.638212    4510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:56:07.638221    4510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
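
Note: the v1.24.1 preload tarball ships images tagged under the legacy k8s.gcr.io registry, while this minikube looks for registry.k8s.io names, so the whole preload is judged missing. It first re-copies and re-extracts the tarball (below), finds the same legacy tags, and then falls back to loading each image from the on-disk cache (the later "needs transfer ... does not exist at hash" lines). A hypothetical manual shortcut would be retagging inside the VM; this is shown only for contrast and is not what minikube does:

	for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
		docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
	done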
	I0831 15:56:07.638270    4510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:56:07.641331    4510 ssh_runner.go:195] Run: which lz4
	I0831 15:56:07.642659    4510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0831 15:56:07.643948    4510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0831 15:56:07.643957    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0831 15:56:08.552343    4510 docker.go:649] duration metric: took 909.747875ms to copy over tarball
	I0831 15:56:08.552399    4510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0831 15:56:09.720334    4510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.167956041s)
	I0831 15:56:09.720351    4510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0831 15:56:09.735976    4510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0831 15:56:09.739315    4510 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0831 15:56:09.744621    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:09.804420    4510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 15:56:11.066491    4510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2620915s)
	I0831 15:56:11.066573    4510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 15:56:11.078769    4510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 15:56:11.078779    4510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0831 15:56:11.078785    4510 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0831 15:56:11.084458    4510 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.086184    4510 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.087948    4510 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.088533    4510 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.089543    4510 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.089931    4510 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.090603    4510 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.092402    4510 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.092789    4510 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.092855    4510 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0831 15:56:11.094082    4510 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.094081    4510 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.094967    4510 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0831 15:56:11.095002    4510 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.095877    4510 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.096416    4510 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.510405    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.522609    4510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0831 15:56:11.522633    4510 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.522686    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0831 15:56:11.532785    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0831 15:56:11.544365    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.545896    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0831 15:56:11.549447    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.551852    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.561526    4510 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0831 15:56:11.561555    4510 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.561611    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0831 15:56:11.569542    4510 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0831 15:56:11.569577    4510 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0831 15:56:11.569631    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0831 15:56:11.577559    4510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0831 15:56:11.577577    4510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0831 15:56:11.577583    4510 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.577586    4510 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.577632    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0831 15:56:11.577632    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0831 15:56:11.577664    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0831 15:56:11.593877    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0831 15:56:11.594000    4510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0831 15:56:11.594957    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0831 15:56:11.594976    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0831 15:56:11.596512    4510 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0831 15:56:11.596525    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0831 15:56:11.603782    4510 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0831 15:56:11.603791    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0831 15:56:11.610093    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0831 15:56:11.621420    4510 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0831 15:56:11.621548    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.643213    4510 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0831 15:56:11.643296    4510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0831 15:56:11.643317    4510 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.643372    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0831 15:56:11.649672    4510 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0831 15:56:11.649695    4510 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.649730    4510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0831 15:56:11.661941    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0831 15:56:11.661947    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0831 15:56:11.662067    4510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0831 15:56:11.663490    4510 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0831 15:56:11.663503    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0831 15:56:11.705030    4510 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0831 15:56:11.705043    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0831 15:56:11.739935    4510 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0831 15:56:11.897343    4510 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0831 15:56:11.897550    4510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.917187    4510 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0831 15:56:11.917227    4510 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.917322    4510 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 15:56:11.937863    4510 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0831 15:56:11.938002    4510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0831 15:56:11.939673    4510 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0831 15:56:11.939685    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0831 15:56:11.969450    4510 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0831 15:56:11.969466    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0831 15:56:12.205840    4510 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0831 15:56:12.205878    4510 cache_images.go:92] duration metric: took 1.12711875s to LoadCachedImages
	W0831 15:56:12.205919    4510 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
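	The block above is minikube's cached-image reconciliation: inspect the image ID in the node's runtime, compare it to the expected hash, `docker rmi` any stale copy, scp the tarball from the host-side cache, and pipe it through `docker load`. The step fails here only because kube-proxy_v1.24.1 is missing from the host cache. A minimal Go sketch of that cycle, assuming hypothetical runOnNode/scpToNode helpers in place of minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// runOnNode and scpToNode are hypothetical stand-ins for minikube's
	// ssh_runner; for illustration they simply run commands locally.
	func runOnNode(args ...string) (string, error) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		return string(out), err
	}

	func scpToNode(src, dst string) error {
		_, err := runOnNode("cp", src, dst)
		return err
	}

	// ensureImage mirrors the inspect -> rmi -> scp -> docker load cycle:
	// if the image is absent, has the wrong hash, or was flagged for an
	// arch mismatch, reload it from the host-side cache tarball.
	func ensureImage(image, wantID, cachePath string) error {
		gotID, err := runOnNode("docker", "image", "inspect", "--format", "{{.Id}}", image)
		if err == nil && strings.TrimSpace(gotID) == wantID {
			return nil // already present with the expected hash
		}
		runOnNode("docker", "rmi", image) // best effort; image may not exist yet
		dst := "/var/lib/minikube/images/" + filepath.Base(cachePath)
		if err := scpToNode(cachePath, dst); err != nil {
			return err
		}
		_, err = runOnNode("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", dst))
		return err
	}

	func main() {
		fmt.Println(ensureImage("registry.k8s.io/pause:3.7",
			"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
			"/tmp/cache/pause_3.7"))
	}

	Checking the content hash rather than just the tag is what catches the amd64-for-arm64 mismatches warned about above.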
	I0831 15:56:12.205928    4510 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0831 15:56:12.205982    4510 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-231000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 15:56:12.206041    4510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 15:56:12.219524    4510 cni.go:84] Creating CNI manager for ""
	I0831 15:56:12.219536    4510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:56:12.219540    4510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 15:56:12.219549    4510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-231000 NodeName:stopped-upgrade-231000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 15:56:12.219619    4510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-231000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
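	The kubelet unit drop-in and the kubeadm config above are rendered from the cluster options and copied to the node as in-memory scps. A minimal Go sketch of that rendering step, using text/template with an illustrative struct (the field names are assumptions, not minikube's actual types):

	package main

	import (
		"os"
		"text/template"
	)

	// initCfg holds just the values visible in the log; minikube's real
	// option structs are much larger.
	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(initTmpl))
		if err := t.Execute(os.Stdout, initCfg{
			AdvertiseAddress: "10.0.2.15",
			BindPort:         8443,
			NodeName:         "stopped-upgrade-231000",
			CRISocket:        "unix:///var/run/cri-dockerd.sock",
		}); err != nil {
			panic(err)
		}
	}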
	I0831 15:56:12.219666    4510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0831 15:56:12.223050    4510 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 15:56:12.223080    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 15:56:12.225739    4510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0831 15:56:12.230602    4510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 15:56:12.235536    4510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0831 15:56:12.240931    4510 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0831 15:56:12.242152    4510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
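	The one-liner above makes the /etc/hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the current mapping, and copy the result back into place. The same logic in Go, writing to /tmp/hosts.new rather than /etc/hosts (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var keep []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any prior mapping for the control-plane alias.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				keep = append(keep, line)
			}
		}
		keep = append(keep, "10.0.2.15\tcontrol-plane.minikube.internal")
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /tmp/hosts.new")
	}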
	I0831 15:56:12.245623    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 15:56:12.310130    4510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 15:56:12.319798    4510 certs.go:68] Setting up /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000 for IP: 10.0.2.15
	I0831 15:56:12.319808    4510 certs.go:194] generating shared ca certs ...
	I0831 15:56:12.319817    4510 certs.go:226] acquiring lock for ca certs: {Name:mk9afb7f1a72cfb2bc15164a899be0d76a69591d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.319982    4510 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key
	I0831 15:56:12.320030    4510 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key
	I0831 15:56:12.320038    4510 certs.go:256] generating profile certs ...
	I0831 15:56:12.320096    4510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key
	I0831 15:56:12.320115    4510 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec
	I0831 15:56:12.320129    4510 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0831 15:56:12.412516    4510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec ...
	I0831 15:56:12.412529    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec: {Name:mkd4f562fa4ad0db149e90b6cef4c0661f21a8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.413896    4510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec ...
	I0831 15:56:12.413902    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec: {Name:mk6d7b1a754e8e097f3c4a5eb00b90c52f3490a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.414043    4510 certs.go:381] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt
	I0831 15:56:12.414176    4510 certs.go:385] copying /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec -> /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key
	I0831 15:56:12.414340    4510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/proxy-client.key
	I0831 15:56:12.414481    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem (1338 bytes)
	W0831 15:56:12.414508    4510 certs.go:480] ignoring /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432_empty.pem, impossibly tiny 0 bytes
	I0831 15:56:12.414513    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 15:56:12.414539    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem (1082 bytes)
	I0831 15:56:12.414558    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem (1123 bytes)
	I0831 15:56:12.414575    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/certs/key.pem (1679 bytes)
	I0831 15:56:12.414618    4510 certs.go:484] found cert: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem (1708 bytes)
	I0831 15:56:12.414937    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 15:56:12.421998    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 15:56:12.428932    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 15:56:12.435961    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0831 15:56:12.443419    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0831 15:56:12.450771    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 15:56:12.457667    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 15:56:12.464423    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 15:56:12.471470    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/certs/1432.pem --> /usr/share/ca-certificates/1432.pem (1338 bytes)
	I0831 15:56:12.478782    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/ssl/certs/14322.pem --> /usr/share/ca-certificates/14322.pem (1708 bytes)
	I0831 15:56:12.485587    4510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 15:56:12.492428    4510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 15:56:12.497522    4510 ssh_runner.go:195] Run: openssl version
	I0831 15:56:12.499334    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1432.pem && ln -fs /usr/share/ca-certificates/1432.pem /etc/ssl/certs/1432.pem"
	I0831 15:56:12.502700    4510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1432.pem
	I0831 15:56:12.504130    4510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:20 /usr/share/ca-certificates/1432.pem
	I0831 15:56:12.504149    4510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1432.pem
	I0831 15:56:12.506004    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1432.pem /etc/ssl/certs/51391683.0"
	I0831 15:56:12.508863    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14322.pem && ln -fs /usr/share/ca-certificates/14322.pem /etc/ssl/certs/14322.pem"
	I0831 15:56:12.512226    4510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14322.pem
	I0831 15:56:12.513702    4510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:20 /usr/share/ca-certificates/14322.pem
	I0831 15:56:12.513724    4510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14322.pem
	I0831 15:56:12.515466    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14322.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 15:56:12.519114    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 15:56:12.522249    4510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:56:12.523721    4510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:05 /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:56:12.523743    4510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 15:56:12.525531    4510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
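	Each CA above is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trusted roots; the .0 suffix disambiguates hash collisions. A sketch of that step, assuming root on the node:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of a PEM cert
	// and links it into /etc/ssl/certs/<hash>.0.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		os.Remove(link) // replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}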
	I0831 15:56:12.528401    4510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 15:56:12.529957    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 15:56:12.531773    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 15:56:12.534590    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 15:56:12.536692    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 15:56:12.538610    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 15:56:12.540328    4510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
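	The `-checkend 86400` runs above ask OpenSSL whether each control-plane cert expires within the next 24 hours. The equivalent check with Go's crypto/x509, as a sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the cert at path expires within the
	// given window, matching `openssl x509 -checkend <seconds>`.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
	}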
	I0831 15:56:12.542350    4510 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50524 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0831 15:56:12.542413    4510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:56:12.552609    4510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 15:56:12.556946    4510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 15:56:12.556952    4510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 15:56:12.556978    4510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 15:56:12.560243    4510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 15:56:12.560542    4510 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-231000" does not appear in /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:56:12.560639    4510 kubeconfig.go:62] /Users/jenkins/minikube-integration/18943-972/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-231000" cluster setting kubeconfig missing "stopped-upgrade-231000" context setting]
	I0831 15:56:12.560822    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:56:12.561271    4510 kapi.go:59] client config for stopped-upgrade-231000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106638030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 15:56:12.561587    4510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 15:56:12.564257    4510 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-231000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
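	Drift detection above is just `diff -u` against the freshly rendered kubeadm.yaml.new: exit code 0 means no drift, exit code 1 means the files differ and the cluster gets reconfigured. A sketch of that exit-code handling:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrift runs `diff -u` and distinguishes "identical" from
	// "differs" from "diff itself failed" via the exit code.
	func configDrift(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // identical
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // files differ
		}
		return false, "", err
	}

	func main() {
		drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drift, err)
		fmt.Print(diff)
	}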
	I0831 15:56:12.564263    4510 kubeadm.go:1160] stopping kube-system containers ...
	I0831 15:56:12.564301    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 15:56:12.576768    4510 docker.go:483] Stopping containers: [bd81755e0db4 8c8721bd1a75 c5f4cc8bd872 f4db5f2a3437 4873f60be323 63e4154d4309 effb4c0615cc b4a24276698b]
	I0831 15:56:12.576832    4510 ssh_runner.go:195] Run: docker stop bd81755e0db4 8c8721bd1a75 c5f4cc8bd872 f4db5f2a3437 4873f60be323 63e4154d4309 effb4c0615cc b4a24276698b
	I0831 15:56:12.587357    4510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0831 15:56:12.593269    4510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 15:56:12.596307    4510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 15:56:12.596315    4510 kubeadm.go:157] found existing configuration files:
	
	I0831 15:56:12.596353    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf
	I0831 15:56:12.599491    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 15:56:12.599523    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 15:56:12.602795    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf
	I0831 15:56:12.605327    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 15:56:12.605358    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 15:56:12.607970    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf
	I0831 15:56:12.610649    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 15:56:12.610671    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 15:56:12.613239    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf
	I0831 15:56:12.615844    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 15:56:12.615867    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
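	The cleanup loop above greps each kubeconfig for the expected control-plane endpoint and removes any file that lacks it (here they are simply absent), so the kubeconfig init phase can regenerate them. A sketch of the loop (the removal would need root in practice):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:50524"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits nonzero when the file is missing or the
			// endpoint is not present; either way the conf is stale.
			if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
				os.Remove(conf)
				fmt.Println("removed", conf)
			}
		}
	}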
	I0831 15:56:12.619029    4510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 15:56:12.622196    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:12.646401    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.034063    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.145112    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0831 15:56:13.171200    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
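	Rather than a full `kubeadm init`, the restart path above runs the individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of driving those phases:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		for _, phase := range [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		} {
			args := append(phase, "--config", cfg)
			if out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput(); err != nil {
				fmt.Printf("%v failed: %v\n%s", phase, err, out)
				return
			}
		}
		fmt.Println("control plane phases complete")
	}

	Running phases individually lets a restart reuse whatever already exists (certs, etcd data) instead of re-initializing the whole node.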
	I0831 15:56:13.191995    4510 api_server.go:52] waiting for apiserver process to appear ...
	I0831 15:56:13.192074    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:13.694127    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:14.194093    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 15:56:14.198295    4510 api_server.go:72] duration metric: took 1.006330625s to wait for apiserver process to appear ...
	I0831 15:56:14.198304    4510 api_server.go:88] waiting for apiserver healthz status ...
	I0831 15:56:14.198313    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:19.200247    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:19.200270    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:24.200326    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:24.200340    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:29.200484    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:29.200525    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:34.200821    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:34.200880    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:39.201285    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:39.201330    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:44.202002    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:44.202024    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:49.202781    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:49.202860    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:54.204040    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:54.204084    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:56:59.205529    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:56:59.205607    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:04.207856    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:04.207894    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:09.209326    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:09.209409    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:14.211886    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
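	The loop above polls https://10.0.2.15:8443/healthz with a 5-second per-request client timeout and keeps retrying; every request here times out, which is what ultimately fails the test. A sketch of such a poller (InsecureSkipVerify is a stand-in for the CA handling minikube actually does, and the 4-minute deadline is illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz retries GET /healthz until it returns 200 or the
	// overall deadline passes.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
	}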
	I0831 15:57:14.212096    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:14.235354    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:14.235457    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:14.250754    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:14.250836    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:14.265853    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:14.265931    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:14.276919    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:14.277000    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:14.286959    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:14.287031    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:14.297645    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:14.297710    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:14.308118    4510 logs.go:276] 0 containers: []
	W0831 15:57:14.308131    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:14.308200    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:14.319074    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:14.319091    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:14.319096    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:14.323584    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:14.323592    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:14.342616    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:14.342627    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:14.359239    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:14.359254    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:14.373014    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:14.373025    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:14.390416    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:14.390427    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:14.401439    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:14.401449    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:14.412656    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:14.412666    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:14.438184    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:14.438195    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:14.450100    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:14.450111    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:14.467494    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:14.467504    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:14.504725    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:14.504734    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:14.532293    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:14.532304    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:14.545925    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:14.545935    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:14.557184    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:14.557195    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:14.648476    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:14.648489    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:14.662458    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:14.662471    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
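	Each failed healthz window triggers the gathering pass above: list containers per component with a k8s_<name> filter, then tail 400 lines of logs from each. A sketch of that pass:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containersFor lists container IDs whose names match the
	// k8s_<component> prefix that kubelet-managed containers carry.
	func containersFor(component string) []string {
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		return strings.Fields(string(out))
	}

	func main() {
		for _, comp := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		} {
			for _, id := range containersFor(comp) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
			}
		}
	}

	Listing with -a matters: the exited pre-restart containers (the second ID per component above) often hold the error that explains why the new ones never came up.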
	I0831 15:57:17.179545    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:22.181768    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:22.181944    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:22.196772    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:22.196852    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:22.207835    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:22.207910    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:22.218120    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:22.218178    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:22.228769    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:22.228857    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:22.239122    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:22.239195    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:22.249660    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:22.249727    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:22.259250    4510 logs.go:276] 0 containers: []
	W0831 15:57:22.259260    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:22.259313    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:22.272645    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:22.272667    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:22.272672    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:22.288257    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:22.288265    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:22.312247    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:22.312255    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:22.337294    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:22.337305    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:22.341597    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:22.341603    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:22.355660    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:22.355670    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:22.370290    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:22.370299    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:22.381843    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:22.381853    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:22.399764    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:22.399773    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:22.437370    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:22.437380    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:22.450913    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:22.450926    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:22.469192    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:22.469202    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:22.480834    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:22.480846    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:22.517617    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:22.517630    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:22.530716    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:22.530727    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:22.542511    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:22.542523    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:22.554423    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:22.554434    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:25.067985    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:30.070301    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:30.070754    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:30.109112    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:30.109240    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:30.130457    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:30.130548    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:30.144992    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:30.145072    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:30.157375    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:30.157461    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:30.167991    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:30.168055    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:30.178427    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:30.178499    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:30.188836    4510 logs.go:276] 0 containers: []
	W0831 15:57:30.188846    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:30.188906    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:30.199931    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:30.199958    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:30.199965    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:30.211853    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:30.211863    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:30.223398    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:30.223407    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:30.262885    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:30.262895    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:30.277350    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:30.277361    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:30.295061    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:30.295072    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:30.332137    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:30.332151    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:30.358171    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:30.358183    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:30.370618    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:30.370629    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:30.396686    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:30.396696    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:30.408286    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:30.408299    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:30.425519    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:30.425532    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:30.442568    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:30.442579    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:30.459743    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:30.459753    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:30.471276    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:30.471286    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:30.475942    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:30.475956    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:30.490451    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:30.490464    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:33.008006    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:38.010523    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:38.010666    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:38.022690    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:38.022767    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:38.033054    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:38.033117    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:38.043522    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:38.043594    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:38.057597    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:38.057661    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:38.067662    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:38.067733    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:38.078199    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:38.078265    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:38.088340    4510 logs.go:276] 0 containers: []
	W0831 15:57:38.088351    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:38.088409    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:38.099273    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:38.099290    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:38.099295    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:38.110394    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:38.110405    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:38.123252    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:38.123265    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:38.142214    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:38.142226    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:38.153064    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:38.153077    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:38.178584    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:38.178605    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:38.212678    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:38.212690    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:38.238153    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:38.238166    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:38.255601    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:38.255613    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:38.270617    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:38.270628    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:38.307834    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:38.307840    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:38.321529    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:38.321540    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:38.333492    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:38.333503    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:38.349219    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:38.349230    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:38.364069    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:38.364083    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:38.376684    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:38.376695    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:38.394074    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:38.394088    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:40.900214    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:45.901833    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
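
The probe at 15:57:40 and the "stopped" result five seconds later show the shape of the retry loop driving this output: each attempt GETs the apiserver's /healthz endpoint and treats a client timeout as "stopped", after which the harness falls back to gathering logs. A minimal Go sketch of that pattern follows (a hypothetical stand-in, not minikube's actual api_server.go; only the URL and the roughly 5-second timeout are taken from the log above):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver once, mirroring the
// "Checking apiserver healthz at ..." / "stopped: ..." pair in the log.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probe and "stopped"
		Transport: &http.Transport{
			// the test VM serves a self-signed cert, so verification is skipped here
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// a timeout surfaces here, analogous to the "stopped:" log line
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err) // on failure, the harness gathers component logs instead
	}
}
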
	I0831 15:57:45.901945    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:45.912637    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:45.912715    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:45.923936    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:45.924006    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:45.938292    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:45.938360    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:45.949213    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:45.949282    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:45.959330    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:45.959393    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:45.969825    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:45.969894    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:45.980322    4510 logs.go:276] 0 containers: []
	W0831 15:57:45.980332    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:45.980386    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:45.990654    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:45.990672    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:45.990678    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:46.025720    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:46.025734    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:46.039969    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:46.039979    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:46.056571    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:46.056581    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:46.071200    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:46.071213    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:46.082663    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:46.082672    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:46.094342    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:46.094354    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:46.132035    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:46.132043    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:46.143119    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:46.143131    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:46.156914    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:46.156924    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:46.161067    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:46.161073    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:46.187488    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:46.187502    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:46.204734    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:46.204744    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:46.219038    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:46.219049    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:46.234043    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:46.234054    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:46.246030    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:46.246040    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:46.256944    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:46.256956    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
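
Each failed probe above is followed by the same gathering pass: list containers per control-plane component by name filter, then tail each container's logs. A minimal Go sketch of that loop, under the assumption that it simply shells out to docker the way the ssh_runner lines show (this is illustrative, not minikube's ssh_runner implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// analogous to: No container was found matching "kindnet"
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s", c, id, logs)
		}
	}
}

The two-ID lists in the log (e.g. kube-apiserver showing c08e846827a6 and 4873f60be323) come from `docker ps -a` including exited containers, so both the current and the previous instance of a restarting component get tailed.
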
	I0831 15:57:48.782228    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:57:53.784484    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:57:53.784683    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:57:53.801615    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:57:53.801705    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:57:53.815138    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:57:53.815204    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:57:53.826857    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:57:53.826923    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:57:53.837213    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:57:53.837292    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:57:53.847841    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:57:53.847900    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:57:53.858425    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:57:53.858487    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:57:53.868919    4510 logs.go:276] 0 containers: []
	W0831 15:57:53.868931    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:57:53.868992    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:57:53.883151    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:57:53.883169    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:57:53.883176    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:57:53.922982    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:57:53.922993    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:57:53.937187    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:57:53.937197    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:57:53.949486    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:57:53.949500    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:57:53.970800    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:57:53.970811    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:57:53.985574    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:57:53.985587    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:57:54.001334    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:57:54.001344    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:57:54.013624    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:57:54.013635    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:57:54.038281    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:57:54.038292    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:57:54.055787    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:57:54.055799    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:57:54.067793    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:57:54.067805    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:57:54.079041    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:57:54.079051    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:57:54.104241    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:57:54.104249    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:57:54.143256    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:57:54.143269    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:57:54.147454    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:57:54.147464    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:57:54.161538    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:57:54.161549    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:57:54.173399    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:57:54.173411    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:57:56.691184    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:01.691495    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:01.691919    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:01.722413    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:01.722526    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:01.740229    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:01.740316    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:01.754284    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:01.754360    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:01.768462    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:01.768529    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:01.778758    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:01.778831    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:01.788970    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:01.789035    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:01.798919    4510 logs.go:276] 0 containers: []
	W0831 15:58:01.798931    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:01.798988    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:01.809789    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:01.809807    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:01.809814    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:01.824134    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:01.824145    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:01.835767    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:01.835779    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:01.874808    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:01.874823    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:01.891547    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:01.891560    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:01.908499    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:01.908512    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:01.920022    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:01.920033    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:01.933318    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:01.933332    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:01.953498    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:01.953511    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:01.967577    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:01.967589    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:01.978739    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:01.978750    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:01.995830    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:01.995838    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:02.008410    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:02.008420    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:02.057442    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:02.057455    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:02.084852    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:02.084866    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:02.097752    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:02.097764    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:02.124283    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:02.124300    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:04.630870    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:09.633018    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:09.633253    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:09.655057    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:09.655162    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:09.669666    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:09.669739    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:09.681812    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:09.681874    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:09.692464    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:09.692534    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:09.705253    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:09.705332    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:09.716256    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:09.716319    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:09.726825    4510 logs.go:276] 0 containers: []
	W0831 15:58:09.726835    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:09.726889    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:09.737414    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:09.737433    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:09.737439    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:09.748515    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:09.748527    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:09.773894    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:09.773909    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:09.788302    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:09.788313    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:09.803524    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:09.803534    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:09.828048    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:09.828057    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:09.840261    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:09.840273    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:09.880203    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:09.880222    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:09.895538    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:09.895549    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:09.913836    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:09.913853    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:09.930293    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:09.930305    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:09.949360    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:09.949368    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:09.964159    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:09.964169    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:10.001763    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:10.001774    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:10.019402    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:10.019415    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:10.031741    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:10.031756    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:10.043794    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:10.043806    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:12.550254    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:17.550928    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:17.551152    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:17.571672    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:17.571768    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:17.586036    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:17.586108    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:17.600286    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:17.600351    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:17.611082    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:17.611159    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:17.621545    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:17.621610    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:17.632233    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:17.632299    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:17.642209    4510 logs.go:276] 0 containers: []
	W0831 15:58:17.642220    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:17.642275    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:17.656730    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:17.656752    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:17.656758    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:17.672441    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:17.672451    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:17.686006    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:17.686018    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:17.709745    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:17.709757    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:17.735355    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:17.735374    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:17.773572    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:17.773583    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:17.789840    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:17.789850    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:17.794561    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:17.794571    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:17.823128    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:17.823142    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:17.838312    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:17.838323    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:17.850054    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:17.850066    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:17.868081    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:17.868093    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:17.880259    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:17.880270    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:17.900134    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:17.900147    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:17.912631    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:17.912648    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:17.952539    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:17.952554    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:17.967758    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:17.967770    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:20.492574    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:25.494876    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:25.495113    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:25.518117    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:25.518216    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:25.536318    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:25.536364    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:25.549188    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:25.549220    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:25.561365    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:25.561436    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:25.575764    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:25.575839    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:25.587247    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:25.587323    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:25.598511    4510 logs.go:276] 0 containers: []
	W0831 15:58:25.598523    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:25.598580    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:25.610205    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:25.610225    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:25.610231    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:25.622269    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:25.622284    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:25.636599    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:25.636610    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:25.662978    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:25.662987    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:25.705929    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:25.705946    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:25.710780    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:25.710787    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:25.729030    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:25.729040    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:25.756273    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:25.756296    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:25.769546    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:25.769557    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:25.785058    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:25.785070    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:25.797020    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:25.797029    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:25.810094    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:25.810104    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:25.847481    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:25.847497    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:25.862516    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:25.862528    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:25.881230    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:25.881244    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:25.893016    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:25.893027    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:25.910725    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:25.910735    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:28.429952    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:33.431990    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:33.432051    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:33.443635    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:33.443706    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:33.456207    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:33.456280    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:33.472175    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:33.472244    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:33.485145    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:33.485216    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:33.496339    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:33.496408    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:33.508734    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:33.508821    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:33.519896    4510 logs.go:276] 0 containers: []
	W0831 15:58:33.519910    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:33.519972    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:33.531457    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:33.531479    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:33.531486    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:33.557712    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:33.557726    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:33.571062    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:33.571078    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:33.585008    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:33.585022    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:33.597404    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:33.597417    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:33.602214    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:33.602225    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:33.621174    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:33.621185    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:33.637221    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:33.637229    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:33.657466    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:33.657483    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:33.673839    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:33.673854    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:33.686529    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:33.686540    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:33.706469    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:33.706483    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:33.732357    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:33.732373    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:33.749332    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:33.749343    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:33.765526    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:33.765539    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:33.777999    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:33.778010    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:33.815798    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:33.815810    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:36.352617    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:41.354731    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:41.354819    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:41.366149    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:41.366227    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:41.378799    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:41.378871    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:41.389871    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:41.389937    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:41.401106    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:41.401178    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:41.413040    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:41.413109    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:41.424223    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:41.424290    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:41.438639    4510 logs.go:276] 0 containers: []
	W0831 15:58:41.438651    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:41.438712    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:41.450123    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:41.450142    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:41.450148    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:41.455242    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:41.455252    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:41.469985    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:41.470000    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:41.482836    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:41.482848    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:41.522698    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:41.522713    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:41.541553    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:41.541565    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:41.554149    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:41.554161    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:41.566786    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:41.566797    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:41.586940    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:41.586951    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:41.603543    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:41.603555    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:41.627101    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:41.627113    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:41.642010    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:41.642022    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:41.654850    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:41.654867    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:41.679800    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:41.679820    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:41.693491    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:41.693503    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:41.735350    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:41.735365    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:41.748633    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:41.748645    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:44.277914    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:49.278101    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:49.278184    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:49.289769    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:49.289838    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:49.301421    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:49.301503    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:49.314780    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:49.314849    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:49.326584    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:49.326654    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:49.338486    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:49.338547    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:49.349844    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:49.349918    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:49.369094    4510 logs.go:276] 0 containers: []
	W0831 15:58:49.369107    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:49.369167    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:49.380988    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:49.381007    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:49.381014    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:49.400291    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:49.400307    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:49.416079    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:49.416097    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:49.444948    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:49.444964    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:49.457489    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:49.457499    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:49.469052    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:49.469064    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:49.506934    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:49.506944    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:49.542210    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:49.542224    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:49.558307    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:49.558319    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:58:49.575995    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:49.576006    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:49.600295    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:49.600304    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:49.604546    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:49.604556    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:49.622667    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:49.622680    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:49.633914    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:49.633927    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:49.653098    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:49.653111    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:49.668394    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:49.668405    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:49.680597    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:49.680606    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:52.195046    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:58:57.195641    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:58:57.195714    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:58:57.207008    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:58:57.207081    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:58:57.219057    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:58:57.219132    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:58:57.230840    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:58:57.230907    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:58:57.243034    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:58:57.243115    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:58:57.256293    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:58:57.256361    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:58:57.272102    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:58:57.272175    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:58:57.283111    4510 logs.go:276] 0 containers: []
	W0831 15:58:57.283125    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:58:57.283187    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:58:57.295927    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:58:57.295944    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:58:57.295951    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:58:57.308587    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:58:57.308601    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:58:57.313330    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:58:57.313336    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:58:57.334089    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:58:57.334103    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:58:57.345799    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:58:57.345813    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:58:57.375846    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:58:57.375860    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:58:57.420048    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:58:57.420062    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:58:57.434999    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:58:57.435013    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:58:57.446509    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:58:57.446519    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:58:57.457983    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:58:57.457994    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:58:57.482849    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:58:57.482861    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:58:57.494331    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:58:57.494343    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:58:57.533449    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:58:57.533458    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:58:57.547803    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:58:57.547814    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:58:57.561688    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:58:57.561699    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:58:57.577887    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:58:57.577898    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:58:57.605592    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:58:57.605604    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:00.125195    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:05.127331    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:05.127439    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:05.138701    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:05.138774    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:05.150416    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:05.150487    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:05.161376    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:05.161445    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:05.172370    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:05.172435    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:05.186411    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:05.186469    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:05.196708    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:05.196768    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:05.207000    4510 logs.go:276] 0 containers: []
	W0831 15:59:05.207009    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:05.207062    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:05.217591    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:05.217609    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:05.217615    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:05.243556    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:05.243565    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:05.257569    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:05.257583    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:05.273514    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:05.273524    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:05.285665    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:05.285675    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:05.303356    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:05.303366    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:05.314450    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:05.314461    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:05.326064    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:05.326076    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:05.365768    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:05.365777    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:05.399773    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:05.399784    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:05.416586    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:05.416597    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:05.440125    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:05.440135    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:05.454613    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:05.454624    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:05.465891    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:05.465903    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:05.481484    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:05.481495    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:05.498949    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:05.498962    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:05.502994    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:05.503000    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:08.018977    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:13.020648    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:13.020739    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:13.031947    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:13.032020    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:13.042522    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:13.042592    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:13.053012    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:13.053068    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:13.064014    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:13.064076    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:13.075093    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:13.075159    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:13.085506    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:13.085566    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:13.095824    4510 logs.go:276] 0 containers: []
	W0831 15:59:13.095836    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:13.095898    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:13.106331    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:13.106351    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:13.106357    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:13.117848    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:13.117860    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:13.133302    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:13.133315    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:13.145500    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:13.145513    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:13.157543    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:13.157556    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:13.168818    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:13.168828    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:13.193399    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:13.193412    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:13.231180    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:13.231191    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:13.246631    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:13.246642    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:13.264523    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:13.264534    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:13.282052    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:13.282061    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:13.299424    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:13.299435    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:13.311259    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:13.311270    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:13.350862    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:13.350870    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:13.355138    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:13.355146    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:13.380927    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:13.380942    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:13.394525    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:13.394535    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:15.909090    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:20.911264    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:20.911354    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:20.923428    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:20.923499    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:20.933659    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:20.933721    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:20.944489    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:20.944562    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:20.959323    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:20.959387    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:20.969572    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:20.969635    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:20.980111    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:20.980185    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:20.994967    4510 logs.go:276] 0 containers: []
	W0831 15:59:20.994979    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:20.995038    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:21.005426    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:21.005443    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:21.005448    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:21.023428    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:21.023440    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:21.056746    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:21.056758    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:21.068137    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:21.068148    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:21.094053    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:21.094063    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:21.108027    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:21.108038    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:21.125492    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:21.125503    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:21.137700    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:21.137714    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:21.148716    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:21.148727    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:21.160583    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:21.160594    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:21.165175    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:21.165182    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:21.182309    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:21.182322    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:21.197271    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:21.197281    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:21.208938    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:21.208948    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:21.221000    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:21.221011    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:21.245654    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:21.245662    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:21.284908    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:21.284918    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:23.800442    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:28.802552    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:28.802703    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:28.813490    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:28.813557    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:28.823962    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:28.824033    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:28.834478    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:28.834546    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:28.844840    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:28.844907    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:28.855197    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:28.855272    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:28.866106    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:28.866171    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:28.879582    4510 logs.go:276] 0 containers: []
	W0831 15:59:28.879594    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:28.879650    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:28.890330    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:28.890347    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:28.890352    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:28.904212    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:28.904225    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:28.915966    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:28.915976    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:28.927852    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:28.927866    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:28.966558    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:28.966569    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:28.999941    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:28.999955    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:29.014353    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:29.014366    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:29.026475    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:29.026486    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:29.043253    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:29.043263    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:29.067167    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:29.067177    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:29.071387    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:29.071395    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:29.088965    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:29.088979    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:29.114516    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:29.114528    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:29.131823    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:29.131837    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:29.142664    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:29.142675    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:29.159251    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:29.159264    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:29.178270    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:29.178280    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:31.691551    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:36.693657    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:36.693758    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:36.709954    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:36.710029    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:36.720634    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:36.720707    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:36.731609    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:36.731681    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:36.742036    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:36.742102    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:36.752910    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:36.752986    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:36.764130    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:36.764193    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:36.779051    4510 logs.go:276] 0 containers: []
	W0831 15:59:36.779063    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:36.779123    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:36.789729    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:36.789751    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:36.789757    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:36.807509    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:36.807524    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:36.828837    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:36.828848    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:36.847609    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:36.847622    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:36.858984    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:36.858994    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:36.863225    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:36.863233    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:36.890524    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:36.890535    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:36.904022    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:36.904035    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:36.921629    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:36.921639    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:36.933595    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:36.933607    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:36.946154    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:36.946165    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:36.957396    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:36.957408    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:36.971367    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:36.971376    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:36.982270    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:36.982279    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:36.993615    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:36.993624    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:37.015663    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:37.015673    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:37.052375    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:37.052384    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:39.587684    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:44.590079    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:44.590171    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:44.601654    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:44.601728    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:44.616098    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:44.616166    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:44.627146    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:44.627203    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:44.638470    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:44.638539    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:44.649007    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:44.649065    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:44.662641    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:44.662715    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:44.673124    4510 logs.go:276] 0 containers: []
	W0831 15:59:44.673134    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:44.673189    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:44.683999    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:44.684021    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:44.684029    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:44.700466    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:44.700477    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:44.715691    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:44.715705    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:44.727484    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:44.727496    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:44.751892    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:44.751904    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:44.765034    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:44.765045    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:44.800232    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:44.800244    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:44.826712    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:44.826724    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:44.841121    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:44.841135    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:44.856310    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:44.856323    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:44.868606    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:44.868619    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:44.880525    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:44.880535    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:44.918051    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:44.918061    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:44.922111    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:44.922119    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:44.935877    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:44.935893    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:44.953132    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:44.953142    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:44.971216    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:44.971227    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:47.483712    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 15:59:52.485922    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 15:59:52.486081    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 15:59:52.497376    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 15:59:52.497443    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 15:59:52.507757    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 15:59:52.507819    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 15:59:52.518082    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 15:59:52.518152    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 15:59:52.528655    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 15:59:52.528716    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 15:59:52.539175    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 15:59:52.539250    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 15:59:52.549452    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 15:59:52.549513    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 15:59:52.562922    4510 logs.go:276] 0 containers: []
	W0831 15:59:52.562933    4510 logs.go:278] No container was found matching "kindnet"
	I0831 15:59:52.562985    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 15:59:52.573598    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 15:59:52.573613    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 15:59:52.573617    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 15:59:52.599174    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 15:59:52.599187    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 15:59:52.613580    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 15:59:52.613591    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 15:59:52.624591    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 15:59:52.624603    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 15:59:52.636207    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 15:59:52.636219    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 15:59:52.647415    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 15:59:52.647428    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 15:59:52.683490    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 15:59:52.683504    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 15:59:52.698771    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 15:59:52.698785    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 15:59:52.711471    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 15:59:52.711482    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 15:59:52.723384    4510 logs.go:123] Gathering logs for Docker ...
	I0831 15:59:52.723395    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 15:59:52.747927    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 15:59:52.747945    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 15:59:52.762222    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 15:59:52.762236    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 15:59:52.779610    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 15:59:52.779619    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 15:59:52.794279    4510 logs.go:123] Gathering logs for container status ...
	I0831 15:59:52.794292    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 15:59:52.806423    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 15:59:52.806433    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 15:59:52.844145    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 15:59:52.844156    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 15:59:52.848847    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 15:59:52.848853    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 15:59:55.369048    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:00.371208    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:00.371309    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:00.386619    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 16:00:00.386693    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:00.396982    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 16:00:00.397052    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:00.407885    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 16:00:00.407951    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:00.418334    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 16:00:00.418394    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:00.428852    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 16:00:00.428921    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:00.443166    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 16:00:00.443236    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:00.453104    4510 logs.go:276] 0 containers: []
	W0831 16:00:00.453117    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:00.453170    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:00.463955    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 16:00:00.463971    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:00.463977    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:00.468543    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 16:00:00.468550    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 16:00:00.486484    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:00.486495    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:00.507681    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:00:00.507690    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:00.520150    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:00.520161    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:00.557279    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 16:00:00.557291    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 16:00:00.573577    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 16:00:00.573587    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 16:00:00.592991    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 16:00:00.593002    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 16:00:00.606713    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 16:00:00.606725    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 16:00:00.632231    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 16:00:00.632240    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 16:00:00.647757    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 16:00:00.647770    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 16:00:00.660326    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 16:00:00.660337    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 16:00:00.672085    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 16:00:00.672096    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 16:00:00.684482    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:00.684492    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:00.725611    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 16:00:00.725630    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 16:00:00.739961    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 16:00:00.739972    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 16:00:00.751371    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 16:00:00.751385    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 16:00:03.270086    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:08.272231    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:08.272370    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:00:08.286011    4510 logs.go:276] 2 containers: [c08e846827a6 4873f60be323]
	I0831 16:00:08.286084    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:00:08.297107    4510 logs.go:276] 2 containers: [9cce4c726d8c bd81755e0db4]
	I0831 16:00:08.297191    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:00:08.307613    4510 logs.go:276] 1 containers: [7e4541f74f09]
	I0831 16:00:08.307677    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:00:08.322238    4510 logs.go:276] 2 containers: [e0903a27c990 c5f4cc8bd872]
	I0831 16:00:08.322302    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:00:08.332702    4510 logs.go:276] 1 containers: [c831e4d7fbfa]
	I0831 16:00:08.332768    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:00:08.343571    4510 logs.go:276] 2 containers: [063cc08dba5b 63e4154d4309]
	I0831 16:00:08.343631    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:00:08.354181    4510 logs.go:276] 0 containers: []
	W0831 16:00:08.354193    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:00:08.354247    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:00:08.365581    4510 logs.go:276] 2 containers: [ae2dcbf15836 a0e43d79c525]
	I0831 16:00:08.365599    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:00:08.365605    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:00:08.369611    4510 logs.go:123] Gathering logs for kube-proxy [c831e4d7fbfa] ...
	I0831 16:00:08.369617    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c831e4d7fbfa"
	I0831 16:00:08.381061    4510 logs.go:123] Gathering logs for kube-controller-manager [063cc08dba5b] ...
	I0831 16:00:08.381072    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063cc08dba5b"
	I0831 16:00:08.398077    4510 logs.go:123] Gathering logs for kube-controller-manager [63e4154d4309] ...
	I0831 16:00:08.398089    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e4154d4309"
	I0831 16:00:08.410398    4510 logs.go:123] Gathering logs for storage-provisioner [ae2dcbf15836] ...
	I0831 16:00:08.410409    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2dcbf15836"
	I0831 16:00:08.430625    4510 logs.go:123] Gathering logs for etcd [9cce4c726d8c] ...
	I0831 16:00:08.430636    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cce4c726d8c"
	I0831 16:00:08.444109    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:00:08.444119    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:00:08.456020    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:00:08.456031    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 16:00:08.494763    4510 logs.go:123] Gathering logs for etcd [bd81755e0db4] ...
	I0831 16:00:08.494774    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81755e0db4"
	I0831 16:00:08.509197    4510 logs.go:123] Gathering logs for coredns [7e4541f74f09] ...
	I0831 16:00:08.509206    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4541f74f09"
	I0831 16:00:08.520982    4510 logs.go:123] Gathering logs for kube-scheduler [c5f4cc8bd872] ...
	I0831 16:00:08.520994    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f4cc8bd872"
	I0831 16:00:08.535763    4510 logs.go:123] Gathering logs for storage-provisioner [a0e43d79c525] ...
	I0831 16:00:08.535776    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e43d79c525"
	I0831 16:00:08.546918    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:00:08.546930    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:00:08.582333    4510 logs.go:123] Gathering logs for kube-apiserver [c08e846827a6] ...
	I0831 16:00:08.582344    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08e846827a6"
	I0831 16:00:08.596745    4510 logs.go:123] Gathering logs for kube-apiserver [4873f60be323] ...
	I0831 16:00:08.596756    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4873f60be323"
	I0831 16:00:08.624091    4510 logs.go:123] Gathering logs for kube-scheduler [e0903a27c990] ...
	I0831 16:00:08.624101    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0903a27c990"
	I0831 16:00:08.641214    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:00:08.641227    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:00:11.165705    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:16.167836    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:16.167920    4510 kubeadm.go:597] duration metric: took 4m3.618059958s to restartPrimaryControlPlane
	W0831 16:00:16.167979    4510 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
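[Editor's note] The probe never succeeded inside the 4m0s window, so minikube now abandons the restart path and rebuilds the control plane; the exact kubeadm reset and kubeadm init invocations follow below. One diagnostic the loop above does not show is why the older control-plane containers kept exiting; a hypothetical manual check, using an apiserver container ID enumerated repeatedly above:

    # hypothetical diagnostic, not from the log; 4873f60be323 is the older kube-apiserver container
    docker inspect -f '{{.State.Status}} exitcode={{.State.ExitCode}}' 4873f60be323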
	I0831 16:00:16.168002    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0831 16:00:17.212590    4510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.04460775s)
	I0831 16:00:17.212645    4510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 16:00:17.217753    4510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 16:00:17.220481    4510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 16:00:17.223463    4510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 16:00:17.223470    4510 kubeadm.go:157] found existing configuration files:
	
	I0831 16:00:17.223496    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf
	I0831 16:00:17.226375    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 16:00:17.226409    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 16:00:17.229113    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf
	I0831 16:00:17.231835    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 16:00:17.231858    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 16:00:17.235108    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf
	I0831 16:00:17.237770    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 16:00:17.237793    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 16:00:17.240176    4510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf
	I0831 16:00:17.243091    4510 kubeadm.go:163] "https://control-plane.minikube.internal:50524" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50524 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 16:00:17.243111    4510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
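[Editor's note] The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed so the upcoming kubeadm init can regenerate it. In this run none of the files existed, so every grep exited with status 2 and every rm was a no-op. A compact sketch of the equivalent loop (endpoint copied from the log):

    # hedged condensation of the per-file checks logged above
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50524" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done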
	I0831 16:00:17.245892    4510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 16:00:17.262440    4510 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0831 16:00:17.262469    4510 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 16:00:17.314827    4510 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 16:00:17.314901    4510 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 16:00:17.314951    4510 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0831 16:00:17.364372    4510 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 16:00:17.368569    4510 out.go:235]   - Generating certificates and keys ...
	I0831 16:00:17.368606    4510 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 16:00:17.368656    4510 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 16:00:17.368700    4510 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0831 16:00:17.368731    4510 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0831 16:00:17.368774    4510 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0831 16:00:17.368808    4510 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0831 16:00:17.368849    4510 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0831 16:00:17.368883    4510 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0831 16:00:17.368921    4510 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0831 16:00:17.368958    4510 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0831 16:00:17.368978    4510 kubeadm.go:310] [certs] Using the existing "sa" key
	I0831 16:00:17.369010    4510 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 16:00:17.440132    4510 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 16:00:17.496069    4510 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 16:00:17.559948    4510 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 16:00:17.647519    4510 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 16:00:17.680302    4510 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 16:00:17.680666    4510 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 16:00:17.680783    4510 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 16:00:17.749643    4510 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 16:00:17.756790    4510 out.go:235]   - Booting up control plane ...
	I0831 16:00:17.756838    4510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 16:00:17.756907    4510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 16:00:17.756981    4510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 16:00:17.757026    4510 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 16:00:17.757114    4510 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0831 16:00:22.258673    4510 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503158 seconds
	I0831 16:00:22.258821    4510 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 16:00:22.262882    4510 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 16:00:22.781923    4510 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 16:00:22.782250    4510 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-231000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 16:00:23.287616    4510 kubeadm.go:310] [bootstrap-token] Using token: runj8w.lv76pt49rlz5uirc
	I0831 16:00:23.291806    4510 out.go:235]   - Configuring RBAC rules ...
	I0831 16:00:23.291893    4510 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 16:00:23.294852    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 16:00:23.300288    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 16:00:23.301393    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0831 16:00:23.302397    4510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 16:00:23.303510    4510 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 16:00:23.308494    4510 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 16:00:23.461883    4510 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 16:00:23.697931    4510 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 16:00:23.698274    4510 kubeadm.go:310] 
	I0831 16:00:23.698312    4510 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 16:00:23.698315    4510 kubeadm.go:310] 
	I0831 16:00:23.698352    4510 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 16:00:23.698354    4510 kubeadm.go:310] 
	I0831 16:00:23.698367    4510 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 16:00:23.698398    4510 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 16:00:23.698427    4510 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 16:00:23.698430    4510 kubeadm.go:310] 
	I0831 16:00:23.698495    4510 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 16:00:23.698500    4510 kubeadm.go:310] 
	I0831 16:00:23.698529    4510 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 16:00:23.698532    4510 kubeadm.go:310] 
	I0831 16:00:23.698582    4510 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 16:00:23.698630    4510 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 16:00:23.698671    4510 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 16:00:23.698675    4510 kubeadm.go:310] 
	I0831 16:00:23.698816    4510 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 16:00:23.698864    4510 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 16:00:23.698871    4510 kubeadm.go:310] 
	I0831 16:00:23.698921    4510 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token runj8w.lv76pt49rlz5uirc \
	I0831 16:00:23.698999    4510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 \
	I0831 16:00:23.699015    4510 kubeadm.go:310] 	--control-plane 
	I0831 16:00:23.699017    4510 kubeadm.go:310] 
	I0831 16:00:23.699135    4510 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 16:00:23.699143    4510 kubeadm.go:310] 
	I0831 16:00:23.699187    4510 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token runj8w.lv76pt49rlz5uirc \
	I0831 16:00:23.699249    4510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2c48eb01c7e14ccabacb40733af33f412e082e4a439e581a52136a5c4cb8c784 
	I0831 16:00:23.699444    4510 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
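
The preflight hint above about pre-pulling images corresponds to the following manual step; a sketch only, reusing the kubeadm binary path and the generated --config file shown in the init invocation at the top of this run:

    # Sketch: pre-pull control-plane images before 'kubeadm init', using the
    # same staged binary and config file as this run.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
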
	I0831 16:00:23.699600    4510 cni.go:84] Creating CNI manager for ""
	I0831 16:00:23.699611    4510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:00:23.702744    4510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 16:00:23.709885    4510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 16:00:23.713613    4510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
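
The 496-byte conflist written above is not printed in this log; as an illustration only, a bridge CNI configuration of the kind minikube generates typically has this shape (the subnet and plugin options below are assumptions, not values from this run):

    # Illustrative only: field values are assumptions, not the actual file.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
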
	I0831 16:00:23.719551    4510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 16:00:23.719663    4510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 16:00:23.719719    4510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-231000 minikube.k8s.io/updated_at=2024_08_31T16_00_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=stopped-upgrade-231000 minikube.k8s.io/primary=true
	I0831 16:00:23.733961    4510 ops.go:34] apiserver oom_adj: -16
	I0831 16:00:23.772037    4510 kubeadm.go:1113] duration metric: took 52.460208ms to wait for elevateKubeSystemPrivileges
	I0831 16:00:23.772139    4510 kubeadm.go:394] duration metric: took 4m11.237113417s to StartCluster
	I0831 16:00:23.772155    4510 settings.go:142] acquiring lock: {Name:mk1f68c46dbdc908eb5f421f0b3b97ef9fe18a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:00:23.772255    4510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:00:23.772712    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/kubeconfig: {Name:mk5b293422cb73c677cf2dff78569dba34983db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:00:23.772912    4510 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:00:23.772963    4510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 16:00:23.773028    4510 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-231000"
	I0831 16:00:23.773039    4510 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:00:23.773042    4510 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-231000"
	W0831 16:00:23.773045    4510 addons.go:243] addon storage-provisioner should already be in state true
	I0831 16:00:23.773056    4510 host.go:66] Checking if "stopped-upgrade-231000" exists ...
	I0831 16:00:23.773072    4510 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-231000"
	I0831 16:00:23.773084    4510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-231000"
	I0831 16:00:23.776853    4510 out.go:177] * Verifying Kubernetes components...
	I0831 16:00:23.777519    4510 kapi.go:59] client config for stopped-upgrade-231000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key", CAFile:"/Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106638030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 16:00:23.781275    4510 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-231000"
	W0831 16:00:23.781281    4510 addons.go:243] addon default-storageclass should already be in state true
	I0831 16:00:23.781292    4510 host.go:66] Checking if "stopped-upgrade-231000" exists ...
	I0831 16:00:23.781879    4510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 16:00:23.781885    4510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 16:00:23.781890    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 16:00:23.784811    4510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 16:00:23.788841    4510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 16:00:23.792764    4510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 16:00:23.792770    4510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 16:00:23.792776    4510 sshutil.go:53] new ssh client: &{IP:localhost Port:50489 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0831 16:00:23.859455    4510 ssh_runner.go:195] Run: sudo systemctl start kubelet
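
The kubelet start above, together with the earlier preflight WARNING that the unit is not enabled, maps to these standard systemd commands on the guest:

    # Standard follow-up for the 'kubelet service is not enabled' warning;
    # 'enable --now' both enables the unit and starts it.
    sudo systemctl enable --now kubelet
    sudo systemctl status kubelet --no-pager
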
	I0831 16:00:23.865156    4510 api_server.go:52] waiting for apiserver process to appear ...
	I0831 16:00:23.865205    4510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 16:00:23.868947    4510 api_server.go:72] duration metric: took 96.025958ms to wait for apiserver process to appear ...
	I0831 16:00:23.868955    4510 api_server.go:88] waiting for apiserver healthz status ...
	I0831 16:00:23.868961    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
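
Each healthz probe in this run can be reproduced by hand; a sketch using the guest IP and the client certificate paths logged for this profile above. Note that 10.0.2.15 is the default QEMU user-mode networking guest address, which is generally not reachable from the host directly, consistent with the i/o timeouts that follow.

    # Manual healthz probe; cert/key/CA paths are the ones logged for the
    # stopped-upgrade-231000 profile earlier in this run.
    curl --cacert /Users/jenkins/minikube-integration/18943-972/.minikube/ca.crt \
         --cert /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.crt \
         --key /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/stopped-upgrade-231000/client.key \
         https://10.0.2.15:8443/healthz
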
	I0831 16:00:23.886577    4510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 16:00:23.897246    4510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 16:00:24.229746    4510 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 16:00:24.229758    4510 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 16:00:28.870632    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:28.870703    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:33.870881    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:33.870925    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:38.871062    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:38.871103    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:43.871316    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:43.871340    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:48.871608    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:48.871629    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:00:53.871995    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:53.872021    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0831 16:00:54.231132    4510 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0831 16:00:54.235352    4510 out.go:177] * Enabled addons: storage-provisioner
	I0831 16:00:54.244232    4510 addons.go:510] duration metric: took 30.472170375s for enable addons: enabled=[storage-provisioner]
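
The failed default-storageclass callback above amounts to setting the default-class annotation on a StorageClass; a minimal equivalent sketch ('standard' is minikube's conventional class name and is an assumption here, since this run never reached the API):

    # Assumed equivalent of the addon callback; the annotation key is the
    # standard Kubernetes default-class marker.
    kubectl patch storageclass standard -p \
      '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
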
	I0831 16:00:58.872601    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:00:58.872650    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:03.873880    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:03.873965    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:08.875221    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:08.875343    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:13.876379    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:13.876399    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:18.878099    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:18.878176    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:23.880642    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:23.880718    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:01:23.891507    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:01:23.891550    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:01:23.901925    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:01:23.901981    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:01:23.913519    4510 logs.go:276] 2 containers: [03990584413f 0673708fc3fa]
	I0831 16:01:23.913569    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:01:23.923851    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:01:23.923911    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:01:23.933823    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:01:23.933893    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:01:23.945301    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:01:23.945377    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:01:23.955110    4510 logs.go:276] 0 containers: []
	W0831 16:01:23.955123    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:01:23.955181    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:01:23.965622    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:01:23.965639    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:01:23.965645    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:01:23.976940    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:01:23.976953    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:01:23.991000    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:01:23.991007    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:01:24.013057    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:01:24.013064    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:01:24.017261    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:01:24.017267    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:01:24.032250    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:01:24.032263    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:01:24.043684    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:01:24.043697    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:01:24.059304    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:01:24.059315    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:01:24.074904    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:01:24.074917    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:01:24.098970    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:01:24.098980    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:01:24.110894    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:01:24.110904    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:01:24.146063    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:01:24.146158    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:01:24.146654    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:01:24.146659    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:01:24.180507    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:01:24.180521    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:01:24.194800    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:01:24.194809    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:01:24.194835    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:01:24.194839    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:01:24.194842    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:01:24.194845    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:01:24.194848    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
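
The diagnostic sweep above, which repeats below on each failed healthz window, pairs a docker ps name filter with a per-container docker logs call; a compact sketch of the same sweep (component names mirror the k8s_<name> filters in this log):

    # Per-component log sweep, mirroring the container filters used by the
    # test harness in this run.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_${c} --format={{.ID}}); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "${id}"
      done
    done
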
	I0831 16:01:34.198771    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:39.201329    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:39.201733    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:01:39.239719    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:01:39.239853    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:01:39.261033    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:01:39.261143    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:01:39.276371    4510 logs.go:276] 2 containers: [03990584413f 0673708fc3fa]
	I0831 16:01:39.276446    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:01:39.289137    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:01:39.289202    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:01:39.299574    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:01:39.299640    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:01:39.309944    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:01:39.310014    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:01:39.320257    4510 logs.go:276] 0 containers: []
	W0831 16:01:39.320270    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:01:39.320319    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:01:39.331054    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:01:39.331068    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:01:39.331073    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:01:39.364958    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:01:39.365049    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:01:39.365518    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:01:39.365522    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:01:39.379285    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:01:39.379297    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:01:39.402801    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:01:39.402812    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:01:39.414036    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:01:39.414047    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:01:39.431872    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:01:39.431882    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:01:39.457026    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:01:39.457033    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:01:39.468275    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:01:39.468285    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:01:39.472541    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:01:39.472546    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:01:39.508490    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:01:39.508502    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:01:39.519878    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:01:39.519891    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:01:39.533577    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:01:39.533589    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:01:39.549094    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:01:39.549107    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:01:39.561595    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:01:39.561608    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:01:39.561633    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:01:39.561637    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:01:39.561641    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:01:39.561663    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:01:39.561668    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
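
The recurring kubelet problem flagged in each sweep is the Node authorizer at work: a kubelet may list or watch a ConfigMap only once a pod scheduled to that node references it, so "no relationship found between node ... and this object" typically clears once the coredns pod is bound to the node. A hedged way to check placement from a working kubeconfig (not something this run could do, given the unreachable apiserver):

    # Hedged check: confirm whether a coredns pod is scheduled to the node,
    # which is what establishes the node-object relationship.
    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide
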
	I0831 16:01:49.565552    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:01:54.567798    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:01:54.568033    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:01:54.593792    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:01:54.593905    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:01:54.612180    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:01:54.612258    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:01:54.625439    4510 logs.go:276] 2 containers: [03990584413f 0673708fc3fa]
	I0831 16:01:54.625514    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:01:54.637006    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:01:54.637070    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:01:54.647450    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:01:54.647513    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:01:54.657841    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:01:54.657911    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:01:54.673562    4510 logs.go:276] 0 containers: []
	W0831 16:01:54.673574    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:01:54.673630    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:01:54.683491    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:01:54.683508    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:01:54.683513    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:01:54.697920    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:01:54.697930    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:01:54.714118    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:01:54.714131    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:01:54.725560    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:01:54.725573    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:01:54.736933    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:01:54.736947    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:01:54.747990    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:01:54.748002    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:01:54.765367    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:01:54.765379    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:01:54.788145    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:01:54.788155    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:01:54.821521    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:01:54.821613    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:01:54.822095    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:01:54.822099    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:01:54.826012    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:01:54.826020    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:01:54.860321    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:01:54.860333    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:01:54.874910    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:01:54.874919    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:01:54.885755    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:01:54.885764    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:01:54.897136    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:01:54.897146    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:01:54.897173    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:01:54.897179    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:01:54.897183    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:01:54.897187    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:01:54.897190    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:04.897942    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:02:09.900681    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:02:09.901096    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:02:09.941599    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:02:09.941710    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:02:09.965299    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:02:09.965395    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:02:09.980936    4510 logs.go:276] 2 containers: [03990584413f 0673708fc3fa]
	I0831 16:02:09.981021    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:02:09.993573    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:02:09.993640    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:02:10.004018    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:02:10.004085    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:02:10.014224    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:02:10.014291    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:02:10.024202    4510 logs.go:276] 0 containers: []
	W0831 16:02:10.024215    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:02:10.024263    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:02:10.034513    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:02:10.034531    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:02:10.034537    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:02:10.047953    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:02:10.047963    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:02:10.059414    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:02:10.059426    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:02:10.071388    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:02:10.071401    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:02:10.090028    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:02:10.090041    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:02:10.113019    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:02:10.113025    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:02:10.145953    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:10.146047    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:10.146515    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:02:10.146519    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:02:10.150546    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:02:10.150555    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:02:10.184801    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:02:10.184814    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:02:10.196804    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:02:10.196813    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:02:10.208564    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:02:10.208578    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:02:10.223170    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:02:10.223180    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:02:10.234748    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:02:10.234758    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:02:10.251817    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:10.251827    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:02:10.251850    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:02:10.251854    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:10.251858    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:10.251861    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:10.251863    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:20.254584    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:02:25.255252    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:02:25.255330    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:02:25.267253    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:02:25.267322    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:02:25.279285    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:02:25.279344    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:02:25.291033    4510 logs.go:276] 2 containers: [03990584413f 0673708fc3fa]
	I0831 16:02:25.291098    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:02:25.303099    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:02:25.303141    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:02:25.314128    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:02:25.314179    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:02:25.326152    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:02:25.326214    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:02:25.337907    4510 logs.go:276] 0 containers: []
	W0831 16:02:25.337918    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:02:25.337959    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:02:25.349055    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:02:25.349069    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:02:25.349075    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:02:25.363277    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:02:25.363291    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:02:25.376850    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:02:25.376863    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:02:25.390562    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:02:25.390572    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:02:25.408263    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:02:25.408274    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:02:25.420593    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:02:25.420607    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:02:25.433094    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:02:25.433104    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:02:25.447726    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:02:25.447739    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:02:25.452586    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:02:25.452597    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:02:25.508759    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:02:25.508772    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:02:25.524446    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:02:25.524458    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:02:25.540848    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:02:25.540860    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:02:25.567153    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:02:25.567172    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:02:25.602803    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:25.602900    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:25.603384    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:25.603389    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:02:25.603411    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:02:25.603423    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:25.603430    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:25.603434    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:25.603438    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:35.607345    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:02:40.609561    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:02:40.609764    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:02:40.634479    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:02:40.634572    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:02:40.650017    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:02:40.650086    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:02:40.662630    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:02:40.662704    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:02:40.673777    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:02:40.673834    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:02:40.684071    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:02:40.684127    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:02:40.694265    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:02:40.694327    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:02:40.704027    4510 logs.go:276] 0 containers: []
	W0831 16:02:40.704038    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:02:40.704089    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:02:40.714242    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:02:40.714260    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:02:40.714265    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:02:40.752128    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:02:40.752140    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:02:40.768009    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:02:40.768020    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:02:40.785578    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:02:40.785588    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:02:40.808997    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:02:40.809004    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:02:40.812825    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:02:40.812833    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:02:40.824685    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:02:40.824698    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:02:40.836064    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:02:40.836077    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:02:40.848318    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:02:40.848329    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:02:40.862197    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:02:40.862207    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:02:40.873250    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:02:40.873262    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:02:40.884732    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:02:40.884746    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:02:40.896191    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:02:40.896203    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:02:40.931753    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:40.931845    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:40.932316    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:02:40.932321    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:02:40.943327    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:02:40.943342    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:02:40.960315    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:40.960326    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:02:40.960350    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:02:40.960356    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:40.960359    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:40.960362    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:40.960382    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:50.962235    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:02:55.964329    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:02:55.964739    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:02:56.001545    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:02:56.001663    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:02:56.022984    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:02:56.023079    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:02:56.038200    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:02:56.038286    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:02:56.050323    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:02:56.050391    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:02:56.076880    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:02:56.076947    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:02:56.087700    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:02:56.087771    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:02:56.097853    4510 logs.go:276] 0 containers: []
	W0831 16:02:56.097865    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:02:56.097917    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:02:56.112342    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:02:56.112360    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:02:56.112366    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:02:56.126126    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:02:56.126136    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:02:56.138302    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:02:56.138315    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:02:56.172381    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:56.172473    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:56.172951    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:02:56.172956    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:02:56.184222    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:02:56.184231    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:02:56.195841    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:02:56.195851    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:02:56.213067    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:02:56.213077    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:02:56.247325    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:02:56.247340    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:02:56.261981    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:02:56.261995    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:02:56.273579    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:02:56.273592    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:02:56.284844    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:02:56.284855    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:02:56.299861    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:02:56.299870    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:02:56.311394    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:02:56.311407    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:02:56.337056    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:02:56.337066    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:02:56.341155    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:02:56.341165    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:02:56.352433    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:56.352445    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:02:56.352474    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:02:56.352478    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:02:56.352483    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:02:56.352486    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:56.352490    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:06.356414    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:03:11.358718    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:03:11.358938    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:03:11.381030    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:03:11.381119    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:03:11.398196    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:03:11.398264    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:03:11.411993    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:03:11.412061    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:03:11.423454    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:03:11.423519    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:03:11.434001    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:03:11.434065    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:03:11.444434    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:03:11.444491    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:03:11.454611    4510 logs.go:276] 0 containers: []
	W0831 16:03:11.454623    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:03:11.454669    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:03:11.473383    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:03:11.473400    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:03:11.473405    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:03:11.485193    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:03:11.485205    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:03:11.518614    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:11.518708    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:11.519179    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:03:11.519183    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:03:11.533973    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:03:11.533984    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:03:11.545514    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:03:11.545527    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:03:11.562732    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:03:11.562742    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:03:11.567572    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:03:11.567580    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:03:11.579561    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:03:11.579573    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:03:11.591363    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:03:11.591376    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:03:11.625468    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:03:11.625480    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:03:11.637618    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:03:11.637631    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:03:11.655109    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:03:11.655119    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:03:11.669208    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:03:11.669220    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:03:11.681555    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:03:11.681569    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:03:11.693253    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:03:11.693264    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:03:11.716497    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:11.716505    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:03:11.716536    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:03:11.716540    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:11.716546    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:11.716564    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:11.716577    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:21.720544    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:03:26.721751    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:03:26.721829    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:03:26.734144    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:03:26.734193    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:03:26.744955    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:03:26.745010    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:03:26.756570    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:03:26.756634    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:03:26.769493    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:03:26.769547    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:03:26.780939    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:03:26.780993    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:03:26.791105    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:03:26.791159    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:03:26.802224    4510 logs.go:276] 0 containers: []
	W0831 16:03:26.802234    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:03:26.802287    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:03:26.821400    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:03:26.821416    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:03:26.821423    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:03:26.837161    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:03:26.837172    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:03:26.856175    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:03:26.856188    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:03:26.861193    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:03:26.861205    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:03:26.875737    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:03:26.875745    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:03:26.888691    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:03:26.888703    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:03:26.913211    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:03:26.913219    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:03:26.928136    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:03:26.928150    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:03:26.941404    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:03:26.941415    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:03:26.954641    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:03:26.954652    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:03:26.966791    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:03:26.966802    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:03:26.984666    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:03:26.984680    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:03:27.012091    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:03:27.012107    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:03:27.024590    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:03:27.024611    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:03:27.061335    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:27.061436    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:27.061936    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:03:27.061946    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:03:27.101321    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:27.101330    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:03:27.101353    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:03:27.101358    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:27.101362    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:27.101367    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:27.101369    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:37.105216    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:03:42.107437    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:03:42.107812    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:03:42.153312    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:03:42.153426    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:03:42.178285    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:03:42.178371    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:03:42.192546    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:03:42.192623    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:03:42.203795    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:03:42.203849    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:03:42.221224    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:03:42.221300    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:03:42.231564    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:03:42.231627    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:03:42.244331    4510 logs.go:276] 0 containers: []
	W0831 16:03:42.244347    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:03:42.244395    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:03:42.255094    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:03:42.255111    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:03:42.255117    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:03:42.289455    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:03:42.289470    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:03:42.315288    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:03:42.315299    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:03:42.344181    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:03:42.344193    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:03:42.378968    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:42.379060    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:42.379528    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:03:42.379532    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:03:42.395480    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:03:42.395493    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:03:42.407283    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:03:42.407296    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:03:42.418805    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:03:42.418819    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:03:42.423322    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:03:42.423330    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:03:42.434812    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:03:42.434822    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:03:42.446451    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:03:42.446463    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:03:42.458476    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:03:42.458489    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:03:42.469929    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:03:42.469942    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:03:42.487468    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:03:42.487478    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:03:42.501511    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:03:42.501522    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:03:42.516560    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:42.516570    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:03:42.516595    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:03:42.516601    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:42.516605    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:42.516609    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:42.516612    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:52.518434    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:03:57.520540    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:03:57.520895    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:03:57.563560    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:03:57.563660    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:03:57.579718    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:03:57.579786    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:03:57.592836    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:03:57.592909    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:03:57.604171    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:03:57.604233    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:03:57.614668    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:03:57.614735    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:03:57.624842    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:03:57.624907    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:03:57.635688    4510 logs.go:276] 0 containers: []
	W0831 16:03:57.635704    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:03:57.635755    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:03:57.646326    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:03:57.646343    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:03:57.646348    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:03:57.657572    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:03:57.657584    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:03:57.668973    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:03:57.668986    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:03:57.673539    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:03:57.673548    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:03:57.687009    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:03:57.687019    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:03:57.698755    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:03:57.698767    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:03:57.710084    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:03:57.710093    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:03:57.727904    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:03:57.727917    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:03:57.750964    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:03:57.750975    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:03:57.786387    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:03:57.786401    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:03:57.798108    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:03:57.798119    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:03:57.813288    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:03:57.813301    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:03:57.825643    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:03:57.825656    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:03:57.858936    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:57.859028    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:57.859497    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:03:57.859501    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:03:57.873711    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:03:57.873720    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:03:57.885491    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:57.885501    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:03:57.885527    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:03:57.885532    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:03:57.885545    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:03:57.885551    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:57.885555    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:07.889510    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:04:12.890541    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:04:12.891014    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0831 16:04:12.930719    4510 logs.go:276] 1 containers: [1104123d8c46]
	I0831 16:04:12.930871    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0831 16:04:12.952833    4510 logs.go:276] 1 containers: [57ba83577a87]
	I0831 16:04:12.952924    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0831 16:04:12.968601    4510 logs.go:276] 4 containers: [ec631acdda87 96f5aafe58b5 03990584413f 0673708fc3fa]
	I0831 16:04:12.968672    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0831 16:04:12.981553    4510 logs.go:276] 1 containers: [4caca7c79420]
	I0831 16:04:12.981636    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0831 16:04:12.992958    4510 logs.go:276] 1 containers: [c466942d2ad1]
	I0831 16:04:12.993015    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0831 16:04:13.003410    4510 logs.go:276] 1 containers: [0bc331470659]
	I0831 16:04:13.003466    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0831 16:04:13.013465    4510 logs.go:276] 0 containers: []
	W0831 16:04:13.013477    4510 logs.go:278] No container was found matching "kindnet"
	I0831 16:04:13.013534    4510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0831 16:04:13.023750    4510 logs.go:276] 1 containers: [aea0a76d72c3]
	I0831 16:04:13.023767    4510 logs.go:123] Gathering logs for kube-apiserver [1104123d8c46] ...
	I0831 16:04:13.023772    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1104123d8c46"
	I0831 16:04:13.038227    4510 logs.go:123] Gathering logs for coredns [ec631acdda87] ...
	I0831 16:04:13.038237    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec631acdda87"
	I0831 16:04:13.049942    4510 logs.go:123] Gathering logs for kube-proxy [c466942d2ad1] ...
	I0831 16:04:13.049954    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c466942d2ad1"
	I0831 16:04:13.061127    4510 logs.go:123] Gathering logs for kube-controller-manager [0bc331470659] ...
	I0831 16:04:13.061137    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bc331470659"
	I0831 16:04:13.078138    4510 logs.go:123] Gathering logs for storage-provisioner [aea0a76d72c3] ...
	I0831 16:04:13.078148    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aea0a76d72c3"
	I0831 16:04:13.089829    4510 logs.go:123] Gathering logs for describe nodes ...
	I0831 16:04:13.089840    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 16:04:13.123906    4510 logs.go:123] Gathering logs for etcd [57ba83577a87] ...
	I0831 16:04:13.123920    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57ba83577a87"
	I0831 16:04:13.137493    4510 logs.go:123] Gathering logs for coredns [96f5aafe58b5] ...
	I0831 16:04:13.137504    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96f5aafe58b5"
	I0831 16:04:13.149194    4510 logs.go:123] Gathering logs for coredns [0673708fc3fa] ...
	I0831 16:04:13.149208    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0673708fc3fa"
	I0831 16:04:13.183067    4510 logs.go:123] Gathering logs for Docker ...
	I0831 16:04:13.183076    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0831 16:04:13.209703    4510 logs.go:123] Gathering logs for kubelet ...
	I0831 16:04:13.209718    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 16:04:13.245547    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:04:13.245645    4510 logs.go:138] Found kubelet problem: Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:04:13.246148    4510 logs.go:123] Gathering logs for dmesg ...
	I0831 16:04:13.246156    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 16:04:13.250864    4510 logs.go:123] Gathering logs for container status ...
	I0831 16:04:13.250875    4510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 16:04:13.262815    4510 logs.go:123] Gathering logs for coredns [03990584413f] ...
	I0831 16:04:13.262828    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03990584413f"
	I0831 16:04:13.275767    4510 logs.go:123] Gathering logs for kube-scheduler [4caca7c79420] ...
	I0831 16:04:13.275782    4510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4caca7c79420"
	I0831 16:04:13.293796    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:13.293810    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 16:04:13.293833    4510 out.go:270] X Problems detected in kubelet:
	W0831 16:04:13.293838    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: W0831 23:00:37.920253   10500 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	W0831 16:04:13.293842    4510 out.go:270]   Aug 31 23:00:37 stopped-upgrade-231000 kubelet[10500]: E0831 23:00:37.920299   10500 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-231000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-231000' and this object
	I0831 16:04:13.293845    4510 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:13.293848    4510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:23.297822    4510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0831 16:04:28.300387    4510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0831 16:04:28.306310    4510 out.go:201] 
	W0831 16:04:28.310196    4510 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0831 16:04:28.310216    4510 out.go:270] * 
	W0831 16:04:28.311953    4510 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:28.325177    4510 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (591.24s)
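
The failure above has two visible symptoms. First, the kubelet on the upgraded node is repeatedly denied the "coredns" ConfigMap ("no relationship found between node ... and this object"): the node authorizer only grants a kubelet access to objects referenced by pods bound to that node, so the denial suggests the apiserver never registered that relationship before the deadline. Second, the healthz probe at https://10.0.2.15:8443/healthz never reports healthy within the 6m0s window, which is what finally aborts the start with GUEST_START. Both probes can be replayed by hand; a minimal diagnostic sketch, assuming the guest is reachable from the host, with the kubeconfig path and endpoint taken from the log (-k only skips certificate verification for this ad-hoc probe):

    # Check whether the node identity holds the permission the kubelet was denied:
    kubectl --kubeconfig=/Users/jenkins/minikube-integration/18943-972/kubeconfig \
      auth can-i list configmaps -n kube-system \
      --as=system:node:stopped-upgrade-231000 --as-group=system:nodes

    # Replay the same health probe the harness polls until its deadline:
    curl -k --max-time 5 https://10.0.2.15:8443/healthz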

TestPause/serial/Start (10.11s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-446000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-446000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.078042291s)

-- stdout --
	* [pause-446000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-446000" primary control-plane node in "pause-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-446000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-446000 -n pause-446000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-446000 -n pause-446000: exit status 7 (30.709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "pause-446000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.11s)
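
Each start failure below is the same host-side condition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU ever launches and minikube aborts with GUEST_PROVISION after one retry. A minimal host-side check, sketched under the assumption that socket_vmnet lives at the paths the logs report (/opt/socket_vmnet/bin/socket_vmnet_client and /var/run/socket_vmnet); the last line applies only if it was installed as a Homebrew service:

    # Does the socket minikube is configured to use exist?
    ls -l /var/run/socket_vmnet

    # Is the socket_vmnet daemon process alive?
    pgrep -fl socket_vmnet

    # If managed by Homebrew services, a restart may restore the listener:
    sudo brew services restart socket_vmnet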

TestNoKubernetes/serial/StartWithK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2 : exit status 80 (9.848100958s)

-- stdout --
	* [NoKubernetes-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-246000" primary control-plane node in "NoKubernetes-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000: exit status 7 (30.349292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "NoKubernetes-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)
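
The three NoKubernetes subtests that follow reuse this profile, so they fail at "Restarting existing qemu2 VM" for the same socket_vmnet reason rather than at first creation. Once the listener is back, the recovery the error text itself suggests is worth spelling out; a sketch using the profile name and test binary from this run:

    # Discard the half-created profile, then retry the start:
    out/minikube-darwin-arm64 delete -p NoKubernetes-246000
    out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2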

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --driver=qemu2 : exit status 80 (5.253769375s)

-- stdout --
	* [NoKubernetes-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-246000
	* Restarting existing qemu2 VM for "NoKubernetes-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000: exit status 7 (62.618125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "NoKubernetes-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --driver=qemu2 : exit status 80 (5.246563166s)

-- stdout --
	* [NoKubernetes-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-246000
	* Restarting existing qemu2 VM for "NoKubernetes-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000: exit status 7 (65.811ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "NoKubernetes-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2 : exit status 80 (5.28344525s)

-- stdout --
	* [NoKubernetes-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-246000
	* Restarting existing qemu2 VM for "NoKubernetes-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-246000 --driver=qemu2 " : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-246000 -n NoKubernetes-246000: exit status 7 (68.741291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "NoKubernetes-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.8970805s)

-- stdout --
	* [auto-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-571000" primary control-plane node in "auto-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:02:14.357119    5754 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:02:14.357259    5754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:14.357263    5754 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:14.357265    5754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:14.357385    5754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:02:14.358535    5754 out.go:352] Setting JSON to false
	I0831 16:02:14.374980    5754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3698,"bootTime":1725141636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:02:14.375057    5754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:02:14.382542    5754 out.go:177] * [auto-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:02:14.390379    5754 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:02:14.390435    5754 notify.go:220] Checking for updates...
	I0831 16:02:14.394311    5754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:02:14.397320    5754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:02:14.401190    5754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:02:14.404252    5754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:02:14.407302    5754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:02:14.410728    5754 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:02:14.410794    5754 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:02:14.410838    5754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:02:14.415241    5754 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:02:14.422345    5754 start.go:297] selected driver: qemu2
	I0831 16:02:14.422354    5754 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:02:14.422361    5754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:02:14.424782    5754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:02:14.427221    5754 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:02:14.430411    5754 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:02:14.430434    5754 cni.go:84] Creating CNI manager for ""
	I0831 16:02:14.430444    5754 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:02:14.430449    5754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:02:14.430494    5754 start.go:340] cluster config:
	{Name:auto-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:02:14.434367    5754 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:02:14.442315    5754 out.go:177] * Starting "auto-571000" primary control-plane node in "auto-571000" cluster
	I0831 16:02:14.445266    5754 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:02:14.445280    5754 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:02:14.445290    5754 cache.go:56] Caching tarball of preloaded images
	I0831 16:02:14.445352    5754 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:02:14.445357    5754 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:02:14.445403    5754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/auto-571000/config.json ...
	I0831 16:02:14.445414    5754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/auto-571000/config.json: {Name:mkd26ab292e40099caf6f513e5ca5cb28e5214cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:02:14.445805    5754 start.go:360] acquireMachinesLock for auto-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:14.445835    5754 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "auto-571000"
	I0831 16:02:14.445844    5754 start.go:93] Provisioning new machine with config: &{Name:auto-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:14.445872    5754 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:14.453260    5754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:14.468373    5754 start.go:159] libmachine.API.Create for "auto-571000" (driver="qemu2")
	I0831 16:02:14.468408    5754 client.go:168] LocalClient.Create starting
	I0831 16:02:14.468471    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:14.468505    5754 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:14.468517    5754 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:14.468564    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:14.468586    5754 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:14.468596    5754 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:14.469126    5754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:14.633788    5754 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:14.776594    5754 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:14.776602    5754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:14.776822    5754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2
	I0831 16:02:14.786547    5754 main.go:141] libmachine: STDOUT: 
	I0831 16:02:14.786574    5754 main.go:141] libmachine: STDERR: 
	I0831 16:02:14.786628    5754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2 +20000M
	I0831 16:02:14.794488    5754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:14.794505    5754 main.go:141] libmachine: STDERR: 
	I0831 16:02:14.794518    5754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2
	I0831 16:02:14.794522    5754 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:14.794532    5754 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:14.794559    5754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:55:9a:8e:39:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2
	I0831 16:02:14.796176    5754 main.go:141] libmachine: STDOUT: 
	I0831 16:02:14.796194    5754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:14.796213    5754 client.go:171] duration metric: took 327.810167ms to LocalClient.Create
	I0831 16:02:16.798376    5754 start.go:128] duration metric: took 2.352544459s to createHost
	I0831 16:02:16.798491    5754 start.go:83] releasing machines lock for "auto-571000", held for 2.352714958s
	W0831 16:02:16.798549    5754 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:16.814851    5754 out.go:177] * Deleting "auto-571000" in qemu2 ...
	W0831 16:02:16.852266    5754 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:16.852296    5754 start.go:729] Will try again in 5 seconds ...
	I0831 16:02:21.854381    5754 start.go:360] acquireMachinesLock for auto-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:21.854651    5754 start.go:364] duration metric: took 200.292µs to acquireMachinesLock for "auto-571000"
	I0831 16:02:21.854709    5754 start.go:93] Provisioning new machine with config: &{Name:auto-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:21.854862    5754 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:21.863247    5754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:21.890322    5754 start.go:159] libmachine.API.Create for "auto-571000" (driver="qemu2")
	I0831 16:02:21.890357    5754 client.go:168] LocalClient.Create starting
	I0831 16:02:21.890443    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:21.890498    5754 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:21.890513    5754 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:21.890555    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:21.890586    5754 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:21.890594    5754 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:21.890969    5754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:22.052978    5754 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:22.152936    5754 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:22.152945    5754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:22.153179    5754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2
	I0831 16:02:22.162645    5754 main.go:141] libmachine: STDOUT: 
	I0831 16:02:22.162670    5754 main.go:141] libmachine: STDERR: 
	I0831 16:02:22.162714    5754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2 +20000M
	I0831 16:02:22.170603    5754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:22.170620    5754 main.go:141] libmachine: STDERR: 
	I0831 16:02:22.170630    5754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2
	I0831 16:02:22.170634    5754 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:22.170640    5754 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:22.170669    5754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ae:74:06:04:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/auto-571000/disk.qcow2
	I0831 16:02:22.172262    5754 main.go:141] libmachine: STDOUT: 
	I0831 16:02:22.172280    5754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:22.172292    5754 client.go:171] duration metric: took 281.939416ms to LocalClient.Create
	I0831 16:02:24.174457    5754 start.go:128] duration metric: took 2.319616916s to createHost
	I0831 16:02:24.174610    5754 start.go:83] releasing machines lock for "auto-571000", held for 2.319971375s
	W0831 16:02:24.174945    5754 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:24.191620    5754 out.go:201] 
	W0831 16:02:24.195730    5754 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:02:24.195792    5754 out.go:270] * 
	* 
	W0831 16:02:24.198209    5754 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:02:24.211624    5754 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
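
The alsologtostderr trace above pins down the failing step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the client exits the moment its connect() on the Unix socket is refused, so the VM is never started. The socket can be probed in isolation, without minikube or QEMU; a sketch assuming the BSD netcat shipped with macOS (its -U flag targets Unix-domain sockets):

    # "Connection refused" here reproduces the failure independent of minikube:
    nc -U /var/run/socket_vmnet < /dev/null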

TestNetworkPlugins/group/kindnet/Start (9.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0831 16:02:32.376427    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.979385292s)

-- stdout --
	* [kindnet-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-571000" primary control-plane node in "kindnet-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:02:26.395281    5871 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:02:26.395408    5871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:26.395412    5871 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:26.395414    5871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:26.395560    5871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:02:26.396676    5871 out.go:352] Setting JSON to false
	I0831 16:02:26.413276    5871 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3710,"bootTime":1725141636,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:02:26.413431    5871 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:02:26.420495    5871 out.go:177] * [kindnet-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:02:26.427512    5871 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:02:26.427572    5871 notify.go:220] Checking for updates...
	I0831 16:02:26.435430    5871 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:02:26.438432    5871 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:02:26.442412    5871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:02:26.445411    5871 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:02:26.448410    5871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:02:26.451697    5871 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:02:26.451762    5871 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:02:26.451813    5871 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:02:26.455432    5871 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:02:26.462410    5871 start.go:297] selected driver: qemu2
	I0831 16:02:26.462416    5871 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:02:26.462422    5871 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:02:26.464874    5871 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:02:26.469424    5871 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:02:26.472423    5871 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:02:26.472463    5871 cni.go:84] Creating CNI manager for "kindnet"
	I0831 16:02:26.472467    5871 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 16:02:26.472495    5871 start.go:340] cluster config:
	{Name:kindnet-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:02:26.476021    5871 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:02:26.483419    5871 out.go:177] * Starting "kindnet-571000" primary control-plane node in "kindnet-571000" cluster
	I0831 16:02:26.487445    5871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:02:26.487476    5871 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:02:26.487488    5871 cache.go:56] Caching tarball of preloaded images
	I0831 16:02:26.487574    5871 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:02:26.487581    5871 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:02:26.487652    5871 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kindnet-571000/config.json ...
	I0831 16:02:26.487665    5871 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kindnet-571000/config.json: {Name:mk3d53bcf495617929eadf2e64018bde9b1d85eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:02:26.488001    5871 start.go:360] acquireMachinesLock for kindnet-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:26.488037    5871 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "kindnet-571000"
	I0831 16:02:26.488048    5871 start.go:93] Provisioning new machine with config: &{Name:kindnet-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:26.488075    5871 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:26.491438    5871 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:26.507210    5871 start.go:159] libmachine.API.Create for "kindnet-571000" (driver="qemu2")
	I0831 16:02:26.507240    5871 client.go:168] LocalClient.Create starting
	I0831 16:02:26.507306    5871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:26.507339    5871 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:26.507346    5871 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:26.507386    5871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:26.507409    5871 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:26.507419    5871 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:26.507766    5871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:26.670470    5871 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:26.773501    5871 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:26.773507    5871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:26.773734    5871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2
	I0831 16:02:26.782990    5871 main.go:141] libmachine: STDOUT: 
	I0831 16:02:26.783009    5871 main.go:141] libmachine: STDERR: 
	I0831 16:02:26.783059    5871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2 +20000M
	I0831 16:02:26.790944    5871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:26.790961    5871 main.go:141] libmachine: STDERR: 
	I0831 16:02:26.790975    5871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2
	I0831 16:02:26.790981    5871 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:26.790991    5871 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:26.791026    5871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:34:78:c0:a9:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2
	I0831 16:02:26.792634    5871 main.go:141] libmachine: STDOUT: 
	I0831 16:02:26.792649    5871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:26.792668    5871 client.go:171] duration metric: took 285.431583ms to LocalClient.Create
	I0831 16:02:28.794788    5871 start.go:128] duration metric: took 2.306754917s to createHost
	I0831 16:02:28.794843    5871 start.go:83] releasing machines lock for "kindnet-571000", held for 2.306866125s
	W0831 16:02:28.794893    5871 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:28.810544    5871 out.go:177] * Deleting "kindnet-571000" in qemu2 ...
	W0831 16:02:28.836098    5871 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:28.836112    5871 start.go:729] Will try again in 5 seconds ...
	I0831 16:02:33.838113    5871 start.go:360] acquireMachinesLock for kindnet-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:33.838408    5871 start.go:364] duration metric: took 225.917µs to acquireMachinesLock for "kindnet-571000"
	I0831 16:02:33.838469    5871 start.go:93] Provisioning new machine with config: &{Name:kindnet-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:33.838652    5871 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:33.843019    5871 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:33.876449    5871 start.go:159] libmachine.API.Create for "kindnet-571000" (driver="qemu2")
	I0831 16:02:33.876509    5871 client.go:168] LocalClient.Create starting
	I0831 16:02:33.876611    5871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:33.876672    5871 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:33.876686    5871 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:33.876745    5871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:33.876781    5871 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:33.876794    5871 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:33.877276    5871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:34.045674    5871 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:34.286435    5871 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:34.286446    5871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:34.286725    5871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2
	I0831 16:02:34.296596    5871 main.go:141] libmachine: STDOUT: 
	I0831 16:02:34.296618    5871 main.go:141] libmachine: STDERR: 
	I0831 16:02:34.296670    5871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2 +20000M
	I0831 16:02:34.304841    5871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:34.304856    5871 main.go:141] libmachine: STDERR: 
	I0831 16:02:34.304869    5871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2
	I0831 16:02:34.304873    5871 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:34.304882    5871 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:34.304923    5871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:52:7e:fc:05:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kindnet-571000/disk.qcow2
	I0831 16:02:34.306606    5871 main.go:141] libmachine: STDOUT: 
	I0831 16:02:34.306622    5871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:34.306634    5871 client.go:171] duration metric: took 430.1315ms to LocalClient.Create
	I0831 16:02:36.308700    5871 start.go:128] duration metric: took 2.47009925s to createHost
	I0831 16:02:36.308740    5871 start.go:83] releasing machines lock for "kindnet-571000", held for 2.470385167s
	W0831 16:02:36.308922    5871 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:36.317209    5871 out.go:201] 
	W0831 16:02:36.321091    5871 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:02:36.321102    5871 out.go:270] * 
	* 
	W0831 16:02:36.322096    5871 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:02:36.339203    5871 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.98s)
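Both create attempts above fail at the same step: socket_vmnet_client exits with "Connection refused" before QEMU is ever launched, which means nothing is listening on /var/run/socket_vmnet. A minimal manual check, as a sketch assuming a Homebrew-managed socket_vmnet service (the service name is an assumption; the paths match the defaults seen in the log):

	# Does the daemon's socket exist?
	ls -l /var/run/socket_vmnet
	# (Re)start the daemon; the service name assumes a Homebrew install:
	sudo brew services start socket_vmnet
	# socket_vmnet_client connects to the socket before exec'ing its command,
	# so wrapping a no-op is enough to verify connectivity:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true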

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.949192291s)

                                                
                                                
-- stdout --
	* [calico-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-571000" primary control-plane node in "calico-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 16:02:38.582352    5986 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:02:38.582470    5986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:38.582473    5986 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:38.582476    5986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:38.582601    5986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:02:38.583668    5986 out.go:352] Setting JSON to false
	I0831 16:02:38.600609    5986 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3722,"bootTime":1725141636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:02:38.600689    5986 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:02:38.608288    5986 out.go:177] * [calico-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:02:38.616139    5986 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:02:38.616192    5986 notify.go:220] Checking for updates...
	I0831 16:02:38.624052    5986 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:02:38.627102    5986 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:02:38.630015    5986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:02:38.633091    5986 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:02:38.636090    5986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:02:38.637999    5986 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:02:38.638066    5986 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:02:38.638107    5986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:02:38.641034    5986 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:02:38.647942    5986 start.go:297] selected driver: qemu2
	I0831 16:02:38.647950    5986 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:02:38.647956    5986 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:02:38.650276    5986 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:02:38.653067    5986 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:02:38.656165    5986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:02:38.656217    5986 cni.go:84] Creating CNI manager for "calico"
	I0831 16:02:38.656222    5986 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0831 16:02:38.656254    5986 start.go:340] cluster config:
	{Name:calico-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:02:38.659839    5986 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:02:38.668092    5986 out.go:177] * Starting "calico-571000" primary control-plane node in "calico-571000" cluster
	I0831 16:02:38.672143    5986 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:02:38.672159    5986 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:02:38.672172    5986 cache.go:56] Caching tarball of preloaded images
	I0831 16:02:38.672232    5986 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:02:38.672238    5986 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:02:38.672303    5986 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/calico-571000/config.json ...
	I0831 16:02:38.672315    5986 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/calico-571000/config.json: {Name:mk585620e6ab4646294ef137765f8de767b2e412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:02:38.672528    5986 start.go:360] acquireMachinesLock for calico-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:38.672560    5986 start.go:364] duration metric: took 26.541µs to acquireMachinesLock for "calico-571000"
	I0831 16:02:38.672569    5986 start.go:93] Provisioning new machine with config: &{Name:calico-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:38.672601    5986 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:38.681056    5986 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:38.696546    5986 start.go:159] libmachine.API.Create for "calico-571000" (driver="qemu2")
	I0831 16:02:38.696574    5986 client.go:168] LocalClient.Create starting
	I0831 16:02:38.696635    5986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:38.696665    5986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:38.696673    5986 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:38.696714    5986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:38.696741    5986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:38.696746    5986 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:38.697084    5986 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:38.860203    5986 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:39.086166    5986 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:39.086175    5986 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:39.086418    5986 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2
	I0831 16:02:39.096218    5986 main.go:141] libmachine: STDOUT: 
	I0831 16:02:39.096236    5986 main.go:141] libmachine: STDERR: 
	I0831 16:02:39.096288    5986 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2 +20000M
	I0831 16:02:39.104432    5986 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:39.104454    5986 main.go:141] libmachine: STDERR: 
	I0831 16:02:39.104473    5986 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2
	I0831 16:02:39.104479    5986 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:39.104489    5986 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:39.104520    5986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:cf:30:62:fc:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2
	I0831 16:02:39.106201    5986 main.go:141] libmachine: STDOUT: 
	I0831 16:02:39.106216    5986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:39.106238    5986 client.go:171] duration metric: took 409.66925ms to LocalClient.Create
	I0831 16:02:41.107666    5986 start.go:128] duration metric: took 2.435128958s to createHost
	I0831 16:02:41.107677    5986 start.go:83] releasing machines lock for "calico-571000", held for 2.435183541s
	W0831 16:02:41.107707    5986 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:41.116383    5986 out.go:177] * Deleting "calico-571000" in qemu2 ...
	W0831 16:02:41.129911    5986 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:41.129919    5986 start.go:729] Will try again in 5 seconds ...
	I0831 16:02:46.131270    5986 start.go:360] acquireMachinesLock for calico-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:46.131865    5986 start.go:364] duration metric: took 453.166µs to acquireMachinesLock for "calico-571000"
	I0831 16:02:46.132004    5986 start.go:93] Provisioning new machine with config: &{Name:calico-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:46.132285    5986 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:46.136774    5986 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:46.179763    5986 start.go:159] libmachine.API.Create for "calico-571000" (driver="qemu2")
	I0831 16:02:46.179813    5986 client.go:168] LocalClient.Create starting
	I0831 16:02:46.179927    5986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:46.179993    5986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:46.180007    5986 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:46.180061    5986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:46.180100    5986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:46.180112    5986 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:46.180585    5986 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:46.352365    5986 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:46.434289    5986 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:46.434298    5986 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:46.434485    5986 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2
	I0831 16:02:46.443897    5986 main.go:141] libmachine: STDOUT: 
	I0831 16:02:46.443916    5986 main.go:141] libmachine: STDERR: 
	I0831 16:02:46.443966    5986 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2 +20000M
	I0831 16:02:46.452003    5986 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:46.452023    5986 main.go:141] libmachine: STDERR: 
	I0831 16:02:46.452035    5986 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2
	I0831 16:02:46.452038    5986 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:46.452050    5986 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:46.452075    5986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2f:5a:23:36:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/calico-571000/disk.qcow2
	I0831 16:02:46.453762    5986 main.go:141] libmachine: STDOUT: 
	I0831 16:02:46.453778    5986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:46.453790    5986 client.go:171] duration metric: took 273.980542ms to LocalClient.Create
	I0831 16:02:48.455922    5986 start.go:128] duration metric: took 2.323673458s to createHost
	I0831 16:02:48.455984    5986 start.go:83] releasing machines lock for "calico-571000", held for 2.324165791s
	W0831 16:02:48.456428    5986 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:48.471221    5986 out.go:201] 
	W0831 16:02:48.474331    5986 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:02:48.474361    5986 out.go:270] * 
	* 
	W0831 16:02:48.475996    5986 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:02:48.490120    5986 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.95s)
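The calico run reproduces the same signature: both attempts build the VM disk successfully and fail only at the socket_vmnet hand-off. For reference, the two qemu-img steps recorded in the log create the disk by converting a raw seed image to qcow2 and then growing its virtual size (paths shortened here for readability; the full paths appear in the log above):

	# Convert the raw seed disk to the qcow2 format the VM boots from
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# Grow the virtual size by 20000 MB; qcow2 allocates space lazily
	qemu-img resize disk.qcow2 +20000M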

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.958916917s)

                                                
                                                
-- stdout --
	* [custom-flannel-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-571000" primary control-plane node in "custom-flannel-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 16:02:50.907777    6107 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:02:50.908159    6107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:50.908163    6107 out.go:358] Setting ErrFile to fd 2...
	I0831 16:02:50.908165    6107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:02:50.908317    6107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:02:50.912453    6107 out.go:352] Setting JSON to false
	I0831 16:02:50.928952    6107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3734,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:02:50.929028    6107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:02:50.933978    6107 out.go:177] * [custom-flannel-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:02:50.939777    6107 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:02:50.939822    6107 notify.go:220] Checking for updates...
	I0831 16:02:50.946839    6107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:02:50.949794    6107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:02:50.952836    6107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:02:50.955860    6107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:02:50.958824    6107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:02:50.962139    6107 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:02:50.962204    6107 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:02:50.962245    6107 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:02:50.965806    6107 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:02:50.972820    6107 start.go:297] selected driver: qemu2
	I0831 16:02:50.972826    6107 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:02:50.972832    6107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:02:50.975101    6107 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:02:50.977831    6107 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:02:50.980861    6107 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:02:50.980878    6107 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0831 16:02:50.980888    6107 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0831 16:02:50.980914    6107 start.go:340] cluster config:
	{Name:custom-flannel-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:02:50.984213    6107 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:02:50.992875    6107 out.go:177] * Starting "custom-flannel-571000" primary control-plane node in "custom-flannel-571000" cluster
	I0831 16:02:50.996796    6107 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:02:50.996807    6107 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:02:50.996816    6107 cache.go:56] Caching tarball of preloaded images
	I0831 16:02:50.996878    6107 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:02:50.996883    6107 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:02:50.996941    6107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/custom-flannel-571000/config.json ...
	I0831 16:02:50.996950    6107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/custom-flannel-571000/config.json: {Name:mkc85da726185c17e4dcb25326ea9926b20ecf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:02:50.997159    6107 start.go:360] acquireMachinesLock for custom-flannel-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:50.997194    6107 start.go:364] duration metric: took 26.667µs to acquireMachinesLock for "custom-flannel-571000"
	I0831 16:02:50.997203    6107 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:50.997228    6107 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:51.003837    6107 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:51.019056    6107 start.go:159] libmachine.API.Create for "custom-flannel-571000" (driver="qemu2")
	I0831 16:02:51.019079    6107 client.go:168] LocalClient.Create starting
	I0831 16:02:51.019143    6107 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:51.019174    6107 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:51.019184    6107 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:51.019220    6107 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:51.019243    6107 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:51.019256    6107 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:51.019617    6107 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:51.179424    6107 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:51.274439    6107 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:51.274445    6107 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:51.274646    6107 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2
	I0831 16:02:51.284174    6107 main.go:141] libmachine: STDOUT: 
	I0831 16:02:51.284193    6107 main.go:141] libmachine: STDERR: 
	I0831 16:02:51.284236    6107 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2 +20000M
	I0831 16:02:51.292344    6107 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:51.292364    6107 main.go:141] libmachine: STDERR: 
	I0831 16:02:51.292377    6107 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2
	I0831 16:02:51.292381    6107 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:51.292395    6107 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:51.292425    6107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:21:9f:b9:c6:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2
	I0831 16:02:51.294088    6107 main.go:141] libmachine: STDOUT: 
	I0831 16:02:51.294104    6107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:51.294124    6107 client.go:171] duration metric: took 275.04925ms to LocalClient.Create
	I0831 16:02:53.296274    6107 start.go:128] duration metric: took 2.299084167s to createHost
	I0831 16:02:53.296347    6107 start.go:83] releasing machines lock for "custom-flannel-571000", held for 2.299210542s
	W0831 16:02:53.296452    6107 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:53.309954    6107 out.go:177] * Deleting "custom-flannel-571000" in qemu2 ...
	W0831 16:02:53.342257    6107 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:02:53.342285    6107 start.go:729] Will try again in 5 seconds ...
	I0831 16:02:58.344328    6107 start.go:360] acquireMachinesLock for custom-flannel-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:02:58.344871    6107 start.go:364] duration metric: took 425.333µs to acquireMachinesLock for "custom-flannel-571000"
	I0831 16:02:58.345093    6107 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:02:58.345450    6107 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:02:58.353959    6107 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:02:58.401931    6107 start.go:159] libmachine.API.Create for "custom-flannel-571000" (driver="qemu2")
	I0831 16:02:58.401982    6107 client.go:168] LocalClient.Create starting
	I0831 16:02:58.402098    6107 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:02:58.402159    6107 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:58.402182    6107 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:58.402247    6107 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:02:58.402295    6107 main.go:141] libmachine: Decoding PEM data...
	I0831 16:02:58.402308    6107 main.go:141] libmachine: Parsing certificate...
	I0831 16:02:58.403050    6107 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:02:58.571740    6107 main.go:141] libmachine: Creating SSH key...
	I0831 16:02:58.776175    6107 main.go:141] libmachine: Creating Disk image...
	I0831 16:02:58.776187    6107 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:02:58.776399    6107 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2
	I0831 16:02:58.785761    6107 main.go:141] libmachine: STDOUT: 
	I0831 16:02:58.785788    6107 main.go:141] libmachine: STDERR: 
	I0831 16:02:58.785861    6107 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2 +20000M
	I0831 16:02:58.793866    6107 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:02:58.793879    6107 main.go:141] libmachine: STDERR: 
	I0831 16:02:58.793894    6107 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2
	I0831 16:02:58.793900    6107 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:02:58.793917    6107 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:02:58.793951    6107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a4:8d:bb:e1:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/custom-flannel-571000/disk.qcow2
	I0831 16:02:58.795575    6107 main.go:141] libmachine: STDOUT: 
	I0831 16:02:58.795590    6107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:02:58.795603    6107 client.go:171] duration metric: took 393.626542ms to LocalClient.Create
	I0831 16:03:00.797664    6107 start.go:128] duration metric: took 2.452265917s to createHost
	I0831 16:03:00.797697    6107 start.go:83] releasing machines lock for "custom-flannel-571000", held for 2.45284825s
	W0831 16:03:00.797832    6107 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:00.805802    6107 out.go:201] 
	W0831 16:03:00.816953    6107 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:03:00.816960    6107 out.go:270] * 
	* 
	W0831 16:03:00.817751    6107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:03:00.828852    6107 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.96s)
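
Both create attempts above fail at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and minikube exits with status 80. A minimal triage sketch for the CI host follows; it assumes the daemon is started by hand rather than via launchd, and the gateway address is an assumption that must match the host's vmnet configuration:

	# Check whether anything is serving the socket the client dials
	# (path taken from the log above):
	ls -l /var/run/socket_vmnet

	# Start the daemon manually; socket_vmnet needs root to use vmnet:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet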

TestNetworkPlugins/group/false/Start (9.83s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.825763709s)

-- stdout --
	* [false-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-571000" primary control-plane node in "false-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:03:03.186290    6224 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:03:03.186411    6224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:03.186414    6224 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:03.186416    6224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:03.186524    6224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:03:03.187642    6224 out.go:352] Setting JSON to false
	I0831 16:03:03.204015    6224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3747,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:03:03.204081    6224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:03:03.210488    6224 out.go:177] * [false-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:03:03.218368    6224 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:03:03.218425    6224 notify.go:220] Checking for updates...
	I0831 16:03:03.227339    6224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:03:03.230285    6224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:03:03.234352    6224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:03:03.237340    6224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:03:03.240288    6224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:03:03.243596    6224 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:03:03.243663    6224 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:03:03.243710    6224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:03:03.247380    6224 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:03:03.254298    6224 start.go:297] selected driver: qemu2
	I0831 16:03:03.254304    6224 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:03:03.254310    6224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:03:03.256546    6224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:03:03.259384    6224 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:03:03.263381    6224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:03:03.263396    6224 cni.go:84] Creating CNI manager for "false"
	I0831 16:03:03.263421    6224 start.go:340] cluster config:
	{Name:false-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:03:03.267103    6224 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:03:03.275324    6224 out.go:177] * Starting "false-571000" primary control-plane node in "false-571000" cluster
	I0831 16:03:03.279299    6224 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:03:03.279311    6224 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:03:03.279319    6224 cache.go:56] Caching tarball of preloaded images
	I0831 16:03:03.279374    6224 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:03:03.279380    6224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:03:03.279431    6224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/false-571000/config.json ...
	I0831 16:03:03.279440    6224 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/false-571000/config.json: {Name:mk380e53c23ed9223f4bc00614b7227e5c47a264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:03:03.279852    6224 start.go:360] acquireMachinesLock for false-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:03.279884    6224 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "false-571000"
	I0831 16:03:03.279896    6224 start.go:93] Provisioning new machine with config: &{Name:false-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:03.279922    6224 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:03.288291    6224 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:03.304747    6224 start.go:159] libmachine.API.Create for "false-571000" (driver="qemu2")
	I0831 16:03:03.304775    6224 client.go:168] LocalClient.Create starting
	I0831 16:03:03.304839    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:03.304874    6224 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:03.304884    6224 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:03.304924    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:03.304948    6224 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:03.304956    6224 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:03.305433    6224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:03.464590    6224 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:03.496836    6224 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:03.496841    6224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:03.497034    6224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2
	I0831 16:03:03.506264    6224 main.go:141] libmachine: STDOUT: 
	I0831 16:03:03.506282    6224 main.go:141] libmachine: STDERR: 
	I0831 16:03:03.506333    6224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2 +20000M
	I0831 16:03:03.514708    6224 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:03.514726    6224 main.go:141] libmachine: STDERR: 
	I0831 16:03:03.514748    6224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2
	I0831 16:03:03.514753    6224 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:03.514766    6224 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:03.514793    6224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:d3:0d:8d:13:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2
	I0831 16:03:03.516469    6224 main.go:141] libmachine: STDOUT: 
	I0831 16:03:03.516487    6224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:03.516524    6224 client.go:171] duration metric: took 211.750416ms to LocalClient.Create
	I0831 16:03:05.518660    6224 start.go:128] duration metric: took 2.238776958s to createHost
	I0831 16:03:05.518761    6224 start.go:83] releasing machines lock for "false-571000", held for 2.238933583s
	W0831 16:03:05.518812    6224 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:05.528569    6224 out.go:177] * Deleting "false-571000" in qemu2 ...
	W0831 16:03:05.556035    6224 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:05.556054    6224 start.go:729] Will try again in 5 seconds ...
	I0831 16:03:10.558191    6224 start.go:360] acquireMachinesLock for false-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:10.558673    6224 start.go:364] duration metric: took 373.334µs to acquireMachinesLock for "false-571000"
	I0831 16:03:10.558832    6224 start.go:93] Provisioning new machine with config: &{Name:false-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:10.559087    6224 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:10.567676    6224 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:10.607108    6224 start.go:159] libmachine.API.Create for "false-571000" (driver="qemu2")
	I0831 16:03:10.607160    6224 client.go:168] LocalClient.Create starting
	I0831 16:03:10.607258    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:10.607334    6224 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:10.607356    6224 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:10.607428    6224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:10.607471    6224 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:10.607484    6224 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:10.608029    6224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:10.777520    6224 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:10.924577    6224 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:10.924586    6224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:10.925033    6224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2
	I0831 16:03:10.934284    6224 main.go:141] libmachine: STDOUT: 
	I0831 16:03:10.934305    6224 main.go:141] libmachine: STDERR: 
	I0831 16:03:10.934355    6224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2 +20000M
	I0831 16:03:10.942497    6224 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:10.942512    6224 main.go:141] libmachine: STDERR: 
	I0831 16:03:10.942531    6224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2
	I0831 16:03:10.942536    6224 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:10.942547    6224 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:10.942581    6224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:04:be:73:4f:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/false-571000/disk.qcow2
	I0831 16:03:10.944206    6224 main.go:141] libmachine: STDOUT: 
	I0831 16:03:10.944223    6224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:10.944234    6224 client.go:171] duration metric: took 337.079292ms to LocalClient.Create
	I0831 16:03:12.946306    6224 start.go:128] duration metric: took 2.387263875s to createHost
	I0831 16:03:12.946366    6224 start.go:83] releasing machines lock for "false-571000", held for 2.387730292s
	W0831 16:03:12.946570    6224 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:12.954943    6224 out.go:201] 
	W0831 16:03:12.958937    6224 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:03:12.958955    6224 out.go:270] * 
	* 
	W0831 16:03:12.960594    6224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:03:12.970850    6224 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.83s)
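
The false-571000 profile fails with the identical two-attempt pattern, which points at the host rather than the CNI under test: the error occurs while launching QEMU, before any Kubernetes or CNI configuration is applied. The refusal can be reproduced without booting a VM; the placeholder command below is an assumption based on socket_vmnet_client's connect-then-exec interface:

	# Dial the daemon socket only; /usr/bin/true stands in for the
	# qemu-system-aarch64 command line that would normally be exec'd:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

	# While the daemon is down this prints the same error as the log:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused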

TestNetworkPlugins/group/enable-default-cni/Start (10.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.042630417s)
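
Note that --enable-default-cni is deprecated; the stderr below shows minikube rewriting it to --cni=bridge before the start proceeds. A sketch of the equivalent modern invocation (not a command that was run in this job):

	out/minikube-darwin-arm64 start -p enable-default-cni-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2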

-- stdout --
	* [enable-default-cni-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-571000" primary control-plane node in "enable-default-cni-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:03:15.155212    6337 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:03:15.155332    6337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:15.155335    6337 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:15.155345    6337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:15.155494    6337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:03:15.156524    6337 out.go:352] Setting JSON to false
	I0831 16:03:15.172901    6337 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3759,"bootTime":1725141636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:03:15.172968    6337 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:03:15.178862    6337 out.go:177] * [enable-default-cni-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:03:15.185726    6337 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:03:15.185769    6337 notify.go:220] Checking for updates...
	I0831 16:03:15.194707    6337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:03:15.198678    6337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:03:15.201745    6337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:03:15.205769    6337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:03:15.208778    6337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:03:15.213051    6337 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:03:15.213115    6337 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:03:15.213163    6337 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:03:15.214459    6337 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:03:15.221766    6337 start.go:297] selected driver: qemu2
	I0831 16:03:15.221773    6337 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:03:15.221780    6337 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:03:15.224083    6337 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:03:15.228601    6337 out.go:177] * Automatically selected the socket_vmnet network
	E0831 16:03:15.231803    6337 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0831 16:03:15.231822    6337 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:03:15.231837    6337 cni.go:84] Creating CNI manager for "bridge"
	I0831 16:03:15.231845    6337 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:03:15.231881    6337 start.go:340] cluster config:
	{Name:enable-default-cni-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:03:15.235485    6337 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:03:15.243697    6337 out.go:177] * Starting "enable-default-cni-571000" primary control-plane node in "enable-default-cni-571000" cluster
	I0831 16:03:15.247761    6337 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:03:15.247778    6337 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:03:15.247793    6337 cache.go:56] Caching tarball of preloaded images
	I0831 16:03:15.247860    6337 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:03:15.247866    6337 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:03:15.247935    6337 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/enable-default-cni-571000/config.json ...
	I0831 16:03:15.247947    6337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/enable-default-cni-571000/config.json: {Name:mk8d1319b391aa56e0b785bbf2ad6855099b6773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:03:15.248174    6337 start.go:360] acquireMachinesLock for enable-default-cni-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:15.248210    6337 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "enable-default-cni-571000"
	I0831 16:03:15.248221    6337 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:15.248254    6337 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:15.256758    6337 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:15.273899    6337 start.go:159] libmachine.API.Create for "enable-default-cni-571000" (driver="qemu2")
	I0831 16:03:15.273943    6337 client.go:168] LocalClient.Create starting
	I0831 16:03:15.274008    6337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:15.274047    6337 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:15.274063    6337 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:15.274108    6337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:15.274134    6337 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:15.274154    6337 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:15.274545    6337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:15.434968    6337 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:15.600986    6337 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:15.600996    6337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:15.601223    6337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2
	I0831 16:03:15.611033    6337 main.go:141] libmachine: STDOUT: 
	I0831 16:03:15.611065    6337 main.go:141] libmachine: STDERR: 
	I0831 16:03:15.611113    6337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2 +20000M
	I0831 16:03:15.619301    6337 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:15.619321    6337 main.go:141] libmachine: STDERR: 
	I0831 16:03:15.619333    6337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2
	I0831 16:03:15.619337    6337 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:15.619347    6337 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:15.619372    6337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ce:a4:fc:79:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2
	I0831 16:03:15.621124    6337 main.go:141] libmachine: STDOUT: 
	I0831 16:03:15.621139    6337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:15.621162    6337 client.go:171] duration metric: took 347.225417ms to LocalClient.Create
	I0831 16:03:17.623228    6337 start.go:128] duration metric: took 2.375028583s to createHost
	I0831 16:03:17.623301    6337 start.go:83] releasing machines lock for "enable-default-cni-571000", held for 2.375152834s
	W0831 16:03:17.623359    6337 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:17.633848    6337 out.go:177] * Deleting "enable-default-cni-571000" in qemu2 ...
	W0831 16:03:17.666828    6337 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:17.666842    6337 start.go:729] Will try again in 5 seconds ...
	I0831 16:03:22.668976    6337 start.go:360] acquireMachinesLock for enable-default-cni-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:22.669653    6337 start.go:364] duration metric: took 549.958µs to acquireMachinesLock for "enable-default-cni-571000"
	I0831 16:03:22.669727    6337 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:22.669982    6337 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:22.676696    6337 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:22.727444    6337 start.go:159] libmachine.API.Create for "enable-default-cni-571000" (driver="qemu2")
	I0831 16:03:22.727506    6337 client.go:168] LocalClient.Create starting
	I0831 16:03:22.727635    6337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:22.727696    6337 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:22.727711    6337 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:22.727779    6337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:22.727826    6337 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:22.727845    6337 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:22.728391    6337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:22.898501    6337 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:23.108265    6337 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:23.108278    6337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:23.108548    6337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2
	I0831 16:03:23.118941    6337 main.go:141] libmachine: STDOUT: 
	I0831 16:03:23.118962    6337 main.go:141] libmachine: STDERR: 
	I0831 16:03:23.119022    6337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2 +20000M
	I0831 16:03:23.127354    6337 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:23.127374    6337 main.go:141] libmachine: STDERR: 
	I0831 16:03:23.127386    6337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2
	I0831 16:03:23.127391    6337 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:23.127397    6337 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:23.127431    6337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3a:44:ca:8e:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/enable-default-cni-571000/disk.qcow2
	I0831 16:03:23.129148    6337 main.go:141] libmachine: STDOUT: 
	I0831 16:03:23.129179    6337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:23.129191    6337 client.go:171] duration metric: took 401.691542ms to LocalClient.Create
	I0831 16:03:25.131292    6337 start.go:128] duration metric: took 2.46135575s to createHost
	I0831 16:03:25.131335    6337 start.go:83] releasing machines lock for "enable-default-cni-571000", held for 2.461728792s
	W0831 16:03:25.131524    6337 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:25.139164    6337 out.go:201] 
	W0831 16:03:25.145205    6337 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:03:25.145455    6337 out.go:270] * 
	W0831 16:03:25.146386    6337 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:03:25.159119    6337 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.04s)
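
Every failure in this group reduces to the same error: socket_vmnet_client cannot reach the daemon behind /var/run/socket_vmnet, so QEMU is never handed a connected network file descriptor and LocalClient.Create aborts. A minimal way to confirm the daemon state on the test host, assuming the install paths shown in the log above and assuming socket_vmnet_client execs its trailing command only after connecting (as the qemu-system-aarch64 invocations suggest); these commands are illustrative and were not part of the test run:

	ls -l /var/run/socket_vmnet        # the unix socket should exist
	pgrep -fl socket_vmnet             # a daemon should be running
	# connectivity probe: exits non-zero with the same "Connection refused"
	# message seen above when nothing is listening on the socket
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true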

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.788998208s)

-- stdout --
	* [flannel-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-571000" primary control-plane node in "flannel-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:03:27.369545    6446 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:03:27.369665    6446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:27.369669    6446 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:27.369671    6446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:27.369806    6446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:03:27.370865    6446 out.go:352] Setting JSON to false
	I0831 16:03:27.387421    6446 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3771,"bootTime":1725141636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:03:27.387496    6446 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:03:27.394846    6446 out.go:177] * [flannel-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:03:27.404651    6446 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:03:27.404734    6446 notify.go:220] Checking for updates...
	I0831 16:03:27.411636    6446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:03:27.415677    6446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:03:27.418675    6446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:03:27.421626    6446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:03:27.424643    6446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:03:27.427956    6446 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:03:27.428019    6446 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:03:27.428061    6446 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:03:27.431592    6446 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:03:27.438677    6446 start.go:297] selected driver: qemu2
	I0831 16:03:27.438683    6446 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:03:27.438688    6446 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:03:27.440935    6446 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:03:27.442514    6446 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:03:27.446705    6446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:03:27.446735    6446 cni.go:84] Creating CNI manager for "flannel"
	I0831 16:03:27.446738    6446 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0831 16:03:27.446763    6446 start.go:340] cluster config:
	{Name:flannel-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:03:27.450430    6446 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:03:27.457632    6446 out.go:177] * Starting "flannel-571000" primary control-plane node in "flannel-571000" cluster
	I0831 16:03:27.461673    6446 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:03:27.461688    6446 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:03:27.461702    6446 cache.go:56] Caching tarball of preloaded images
	I0831 16:03:27.461768    6446 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:03:27.461774    6446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:03:27.461845    6446 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/flannel-571000/config.json ...
	I0831 16:03:27.461857    6446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/flannel-571000/config.json: {Name:mkf31d7dd5786be85802aabf9e1df2ec4d651a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:03:27.462079    6446 start.go:360] acquireMachinesLock for flannel-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:27.462111    6446 start.go:364] duration metric: took 26.125µs to acquireMachinesLock for "flannel-571000"
	I0831 16:03:27.462120    6446 start.go:93] Provisioning new machine with config: &{Name:flannel-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:27.462148    6446 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:27.469711    6446 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:27.486674    6446 start.go:159] libmachine.API.Create for "flannel-571000" (driver="qemu2")
	I0831 16:03:27.486703    6446 client.go:168] LocalClient.Create starting
	I0831 16:03:27.486769    6446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:27.486804    6446 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:27.486813    6446 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:27.486846    6446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:27.486869    6446 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:27.486879    6446 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:27.487233    6446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:27.646169    6446 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:27.684587    6446 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:27.684595    6446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:27.684807    6446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2
	I0831 16:03:27.694011    6446 main.go:141] libmachine: STDOUT: 
	I0831 16:03:27.694027    6446 main.go:141] libmachine: STDERR: 
	I0831 16:03:27.694074    6446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2 +20000M
	I0831 16:03:27.701911    6446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:27.701924    6446 main.go:141] libmachine: STDERR: 
	I0831 16:03:27.701941    6446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2
	I0831 16:03:27.701945    6446 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:27.701963    6446 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:27.701989    6446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4a:b8:a7:ae:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2
	I0831 16:03:27.703597    6446 main.go:141] libmachine: STDOUT: 
	I0831 16:03:27.703614    6446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:27.703630    6446 client.go:171] duration metric: took 216.928708ms to LocalClient.Create
	I0831 16:03:29.705670    6446 start.go:128] duration metric: took 2.243575041s to createHost
	I0831 16:03:29.705704    6446 start.go:83] releasing machines lock for "flannel-571000", held for 2.243653291s
	W0831 16:03:29.705744    6446 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:29.713606    6446 out.go:177] * Deleting "flannel-571000" in qemu2 ...
	W0831 16:03:29.739471    6446 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:29.739493    6446 start.go:729] Will try again in 5 seconds ...
	I0831 16:03:34.741696    6446 start.go:360] acquireMachinesLock for flannel-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:34.742340    6446 start.go:364] duration metric: took 484.917µs to acquireMachinesLock for "flannel-571000"
	I0831 16:03:34.742511    6446 start.go:93] Provisioning new machine with config: &{Name:flannel-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:34.742745    6446 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:34.762311    6446 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:34.812131    6446 start.go:159] libmachine.API.Create for "flannel-571000" (driver="qemu2")
	I0831 16:03:34.812185    6446 client.go:168] LocalClient.Create starting
	I0831 16:03:34.812314    6446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:34.812374    6446 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:34.812392    6446 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:34.812447    6446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:34.812492    6446 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:34.812504    6446 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:34.812992    6446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:34.984700    6446 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:35.068704    6446 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:35.068717    6446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:35.068932    6446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2
	I0831 16:03:35.078379    6446 main.go:141] libmachine: STDOUT: 
	I0831 16:03:35.078401    6446 main.go:141] libmachine: STDERR: 
	I0831 16:03:35.078448    6446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2 +20000M
	I0831 16:03:35.086706    6446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:35.086765    6446 main.go:141] libmachine: STDERR: 
	I0831 16:03:35.086779    6446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2
	I0831 16:03:35.086783    6446 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:35.086793    6446 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:35.086819    6446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e6:ff:99:98:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/flannel-571000/disk.qcow2
	I0831 16:03:35.088545    6446 main.go:141] libmachine: STDOUT: 
	I0831 16:03:35.088560    6446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:35.088574    6446 client.go:171] duration metric: took 276.391ms to LocalClient.Create
	I0831 16:03:37.090643    6446 start.go:128] duration metric: took 2.347951916s to createHost
	I0831 16:03:37.090691    6446 start.go:83] releasing machines lock for "flannel-571000", held for 2.348354917s
	W0831 16:03:37.090764    6446 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:37.099253    6446 out.go:201] 
	W0831 16:03:37.108196    6446 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:03:37.108210    6446 out.go:270] * 
	W0831 16:03:37.108775    6446 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:03:37.117172    6446 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
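
If the daemon is simply not running, restarting it should clear this whole class of failures before the next run. A sketch, assuming socket_vmnet was installed from source under /opt/socket_vmnet as the paths above indicate (the gateway address is only an example; a Homebrew install would instead be managed with "sudo brew services start socket_vmnet"):

	# run the daemon in the foreground for debugging; it needs root to
	# create the vmnet interface, and it owns /var/run/socket_vmnet
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet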

TestNetworkPlugins/group/bridge/Start (9.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.809975s)

-- stdout --
	* [bridge-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-571000" primary control-plane node in "bridge-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:03:39.487872    6565 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:03:39.487993    6565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:39.487996    6565 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:39.487998    6565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:39.488142    6565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:03:39.489261    6565 out.go:352] Setting JSON to false
	I0831 16:03:39.506024    6565 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3783,"bootTime":1725141636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:03:39.506096    6565 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:03:39.511634    6565 out.go:177] * [bridge-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:03:39.519393    6565 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:03:39.519421    6565 notify.go:220] Checking for updates...
	I0831 16:03:39.527519    6565 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:03:39.530543    6565 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:03:39.533466    6565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:03:39.536593    6565 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:03:39.539458    6565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:03:39.542777    6565 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:03:39.542841    6565 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:03:39.542887    6565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:03:39.554500    6565 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:03:39.557501    6565 start.go:297] selected driver: qemu2
	I0831 16:03:39.557508    6565 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:03:39.557517    6565 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:03:39.559892    6565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:03:39.562427    6565 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:03:39.565521    6565 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:03:39.565549    6565 cni.go:84] Creating CNI manager for "bridge"
	I0831 16:03:39.565553    6565 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:03:39.565584    6565 start.go:340] cluster config:
	{Name:bridge-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:03:39.569231    6565 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:03:39.577445    6565 out.go:177] * Starting "bridge-571000" primary control-plane node in "bridge-571000" cluster
	I0831 16:03:39.581533    6565 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:03:39.581545    6565 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:03:39.581555    6565 cache.go:56] Caching tarball of preloaded images
	I0831 16:03:39.581609    6565 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:03:39.581613    6565 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:03:39.581671    6565 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/bridge-571000/config.json ...
	I0831 16:03:39.581681    6565 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/bridge-571000/config.json: {Name:mkfcc0c71fbe93901bf70e28c785bc6f242e83ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:03:39.581891    6565 start.go:360] acquireMachinesLock for bridge-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:39.581921    6565 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "bridge-571000"
	I0831 16:03:39.581930    6565 start.go:93] Provisioning new machine with config: &{Name:bridge-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:39.581961    6565 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:39.589533    6565 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:39.604872    6565 start.go:159] libmachine.API.Create for "bridge-571000" (driver="qemu2")
	I0831 16:03:39.604906    6565 client.go:168] LocalClient.Create starting
	I0831 16:03:39.604969    6565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:39.604997    6565 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:39.605007    6565 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:39.605053    6565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:39.605074    6565 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:39.605084    6565 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:39.605452    6565 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:39.766369    6565 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:39.821927    6565 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:39.821932    6565 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:39.822146    6565 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2
	I0831 16:03:39.831771    6565 main.go:141] libmachine: STDOUT: 
	I0831 16:03:39.831791    6565 main.go:141] libmachine: STDERR: 
	I0831 16:03:39.831846    6565 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2 +20000M
	I0831 16:03:39.840096    6565 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:39.840128    6565 main.go:141] libmachine: STDERR: 
	I0831 16:03:39.840143    6565 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2
	I0831 16:03:39.840146    6565 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:39.840160    6565 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:39.840184    6565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:f0:26:e3:a4:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2
	I0831 16:03:39.841888    6565 main.go:141] libmachine: STDOUT: 
	I0831 16:03:39.841905    6565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:39.841924    6565 client.go:171] duration metric: took 237.017ms to LocalClient.Create
	I0831 16:03:41.844102    6565 start.go:128] duration metric: took 2.262174s to createHost
	I0831 16:03:41.844186    6565 start.go:83] releasing machines lock for "bridge-571000", held for 2.262321708s
	W0831 16:03:41.844289    6565 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:41.850819    6565 out.go:177] * Deleting "bridge-571000" in qemu2 ...
	W0831 16:03:41.884766    6565 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:41.884804    6565 start.go:729] Will try again in 5 seconds ...
	I0831 16:03:46.886924    6565 start.go:360] acquireMachinesLock for bridge-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:46.887401    6565 start.go:364] duration metric: took 387.708µs to acquireMachinesLock for "bridge-571000"
	I0831 16:03:46.887550    6565 start.go:93] Provisioning new machine with config: &{Name:bridge-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:46.887842    6565 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:46.893534    6565 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:46.943216    6565 start.go:159] libmachine.API.Create for "bridge-571000" (driver="qemu2")
	I0831 16:03:46.943267    6565 client.go:168] LocalClient.Create starting
	I0831 16:03:46.943381    6565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:46.943451    6565 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:46.943466    6565 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:46.943527    6565 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:46.943575    6565 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:46.943591    6565 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:46.944419    6565 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:47.115435    6565 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:47.203117    6565 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:47.203127    6565 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:47.203352    6565 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2
	I0831 16:03:47.212792    6565 main.go:141] libmachine: STDOUT: 
	I0831 16:03:47.212813    6565 main.go:141] libmachine: STDERR: 
	I0831 16:03:47.212863    6565 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2 +20000M
	I0831 16:03:47.221030    6565 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:47.221051    6565 main.go:141] libmachine: STDERR: 
	I0831 16:03:47.221063    6565 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2
	I0831 16:03:47.221069    6565 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:47.221079    6565 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:47.221108    6565 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:74:04:8b:42:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/disk.qcow2
	I0831 16:03:47.222847    6565 main.go:141] libmachine: STDOUT: 
	I0831 16:03:47.222866    6565 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:47.222878    6565 client.go:171] duration metric: took 279.615125ms to LocalClient.Create
	I0831 16:03:49.224950    6565 start.go:128] duration metric: took 2.337154s to createHost
	I0831 16:03:49.224985    6565 start.go:83] releasing machines lock for "bridge-571000", held for 2.337632875s
	W0831 16:03:49.225130    6565 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:49.241331    6565 out.go:201] 
	W0831 16:03:49.244397    6565 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:03:49.244405    6565 out.go:270] * 
	W0831 16:03:49.245228    6565 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:03:49.260327    6565 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.81s)
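
Note that the qemu-img convert and resize steps succeed on every attempt; only the socket_vmnet_client wrapper fails, and it fails before qemu-system-aarch64 is ever exec'd, so no VM process or pidfile is created. A quick triage check, assuming the machine directory layout captured above:

	# disk.qcow2 and boot2docker.iso are present, but qemu.pid should be absent
	ls /Users/jenkins/minikube-integration/18943-972/.minikube/machines/bridge-571000/
	pgrep -fl qemu-system-aarch64      # and no stray VM processes remain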

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-571000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.861425625s)

-- stdout --
	* [kubenet-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-571000" primary control-plane node in "kubenet-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:03:51.430819    6679 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:03:51.430958    6679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:51.430961    6679 out.go:358] Setting ErrFile to fd 2...
	I0831 16:03:51.430968    6679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:03:51.431110    6679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:03:51.432323    6679 out.go:352] Setting JSON to false
	I0831 16:03:51.449077    6679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3795,"bootTime":1725141636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:03:51.449156    6679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:03:51.454290    6679 out.go:177] * [kubenet-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:03:51.462167    6679 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:03:51.462219    6679 notify.go:220] Checking for updates...
	I0831 16:03:51.469099    6679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:03:51.472101    6679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:03:51.475045    6679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:03:51.478088    6679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:03:51.481062    6679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:03:51.484345    6679 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:03:51.484409    6679 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:03:51.484461    6679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:03:51.489101    6679 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:03:51.496051    6679 start.go:297] selected driver: qemu2
	I0831 16:03:51.496058    6679 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:03:51.496064    6679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:03:51.498466    6679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:03:51.501124    6679 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:03:51.504239    6679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:03:51.504285    6679 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0831 16:03:51.504325    6679 start.go:340] cluster config:
	{Name:kubenet-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:03:51.508008    6679 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:03:51.514079    6679 out.go:177] * Starting "kubenet-571000" primary control-plane node in "kubenet-571000" cluster
	I0831 16:03:51.517040    6679 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:03:51.517054    6679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:03:51.517063    6679 cache.go:56] Caching tarball of preloaded images
	I0831 16:03:51.517118    6679 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:03:51.517123    6679 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:03:51.517180    6679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kubenet-571000/config.json ...
	I0831 16:03:51.517190    6679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/kubenet-571000/config.json: {Name:mk291594f33c97f0a58c904a0ad7ddf1c7ae9389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:03:51.517399    6679 start.go:360] acquireMachinesLock for kubenet-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:51.517431    6679 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "kubenet-571000"
	I0831 16:03:51.517440    6679 start.go:93] Provisioning new machine with config: &{Name:kubenet-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:51.517464    6679 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:51.526088    6679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:51.541671    6679 start.go:159] libmachine.API.Create for "kubenet-571000" (driver="qemu2")
	I0831 16:03:51.541697    6679 client.go:168] LocalClient.Create starting
	I0831 16:03:51.541765    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:51.541796    6679 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:51.541806    6679 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:51.541849    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:51.541871    6679 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:51.541880    6679 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:51.542229    6679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:51.702709    6679 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:51.842207    6679 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:51.842214    6679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:51.842426    6679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2
	I0831 16:03:51.851884    6679 main.go:141] libmachine: STDOUT: 
	I0831 16:03:51.851900    6679 main.go:141] libmachine: STDERR: 
	I0831 16:03:51.851966    6679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2 +20000M
	I0831 16:03:51.859972    6679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:51.859985    6679 main.go:141] libmachine: STDERR: 
	I0831 16:03:51.860005    6679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2
	I0831 16:03:51.860010    6679 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:51.860021    6679 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:51.860051    6679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:44:46:5e:00:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2
	I0831 16:03:51.861673    6679 main.go:141] libmachine: STDOUT: 
	I0831 16:03:51.861688    6679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:51.861711    6679 client.go:171] duration metric: took 320.013042ms to LocalClient.Create
	I0831 16:03:53.863880    6679 start.go:128] duration metric: took 2.34645s to createHost
	I0831 16:03:53.864001    6679 start.go:83] releasing machines lock for "kubenet-571000", held for 2.346616625s
	W0831 16:03:53.864091    6679 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:53.882394    6679 out.go:177] * Deleting "kubenet-571000" in qemu2 ...
	W0831 16:03:53.915581    6679 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:03:53.915631    6679 start.go:729] Will try again in 5 seconds ...
	I0831 16:03:58.917699    6679 start.go:360] acquireMachinesLock for kubenet-571000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:03:58.918170    6679 start.go:364] duration metric: took 382.583µs to acquireMachinesLock for "kubenet-571000"
	I0831 16:03:58.918258    6679 start.go:93] Provisioning new machine with config: &{Name:kubenet-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:03:58.918481    6679 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:03:58.927164    6679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0831 16:03:58.976413    6679 start.go:159] libmachine.API.Create for "kubenet-571000" (driver="qemu2")
	I0831 16:03:58.976474    6679 client.go:168] LocalClient.Create starting
	I0831 16:03:58.976606    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:03:58.976679    6679 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:58.976697    6679 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:58.976764    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:03:58.976813    6679 main.go:141] libmachine: Decoding PEM data...
	I0831 16:03:58.976828    6679 main.go:141] libmachine: Parsing certificate...
	I0831 16:03:58.977541    6679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:03:59.147914    6679 main.go:141] libmachine: Creating SSH key...
	I0831 16:03:59.194998    6679 main.go:141] libmachine: Creating Disk image...
	I0831 16:03:59.195003    6679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:03:59.195206    6679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2
	I0831 16:03:59.204726    6679 main.go:141] libmachine: STDOUT: 
	I0831 16:03:59.204750    6679 main.go:141] libmachine: STDERR: 
	I0831 16:03:59.204813    6679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2 +20000M
	I0831 16:03:59.213892    6679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:03:59.213919    6679 main.go:141] libmachine: STDERR: 
	I0831 16:03:59.213933    6679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2
	I0831 16:03:59.213938    6679 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:03:59.213948    6679 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:03:59.213994    6679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e9:8e:e6:39:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/kubenet-571000/disk.qcow2
	I0831 16:03:59.215907    6679 main.go:141] libmachine: STDOUT: 
	I0831 16:03:59.215924    6679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:03:59.215937    6679 client.go:171] duration metric: took 239.462417ms to LocalClient.Create
	I0831 16:04:01.217961    6679 start.go:128] duration metric: took 2.299528417s to createHost
	I0831 16:04:01.217995    6679 start.go:83] releasing machines lock for "kubenet-571000", held for 2.299854458s
	W0831 16:04:01.218119    6679 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:01.232553    6679 out.go:201] 
	W0831 16:04:01.236768    6679 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:01.236783    6679 out.go:270] * 
	* 
	W0831 16:04:01.237567    6679 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:01.250496    6679 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
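
The stderr above also documents minikube's retry policy for host creation: the first StartHost failure is non-fatal ("Will try again in 5 seconds", start.go:729), the half-created profile is deleted, and only the second consecutive failure becomes the fatal GUEST_PROVISION exit. A simplified sketch of that control flow, with illustrative names rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine's VM creation; on this agent it
	// always fails the same way.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // fixed back-off seen in the log
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}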

TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-797000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-797000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.810582s)

-- stdout --
	* [old-k8s-version-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-797000" primary control-plane node in "old-k8s-version-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:04:03.423540    6790 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:03.423684    6790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:03.423687    6790 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:03.423690    6790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:03.423816    6790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:03.424877    6790 out.go:352] Setting JSON to false
	I0831 16:04:03.441141    6790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3807,"bootTime":1725141636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:03.441212    6790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:03.448226    6790 out.go:177] * [old-k8s-version-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:03.456060    6790 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:03.456135    6790 notify.go:220] Checking for updates...
	I0831 16:04:03.463987    6790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:03.467024    6790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:03.470017    6790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:03.473018    6790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:03.476052    6790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:03.479236    6790 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:03.479303    6790 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:04:03.479361    6790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:03.483002    6790 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:04:03.490002    6790 start.go:297] selected driver: qemu2
	I0831 16:04:03.490006    6790 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:04:03.490013    6790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:03.492260    6790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:04:03.494985    6790 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:04:03.498140    6790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:03.498162    6790 cni.go:84] Creating CNI manager for ""
	I0831 16:04:03.498169    6790 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 16:04:03.498207    6790 start.go:340] cluster config:
	{Name:old-k8s-version-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:03.501737    6790 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:03.508998    6790 out.go:177] * Starting "old-k8s-version-797000" primary control-plane node in "old-k8s-version-797000" cluster
	I0831 16:04:03.512800    6790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 16:04:03.512814    6790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 16:04:03.512825    6790 cache.go:56] Caching tarball of preloaded images
	I0831 16:04:03.512876    6790 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:04:03.512881    6790 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 16:04:03.512930    6790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/old-k8s-version-797000/config.json ...
	I0831 16:04:03.512940    6790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/old-k8s-version-797000/config.json: {Name:mk27136f95f301df624029853970b32e60a62724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:04:03.513240    6790 start.go:360] acquireMachinesLock for old-k8s-version-797000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:03.513272    6790 start.go:364] duration metric: took 25µs to acquireMachinesLock for "old-k8s-version-797000"
	I0831 16:04:03.513282    6790 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:03.513317    6790 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:03.520824    6790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:03.535975    6790 start.go:159] libmachine.API.Create for "old-k8s-version-797000" (driver="qemu2")
	I0831 16:04:03.536004    6790 client.go:168] LocalClient.Create starting
	I0831 16:04:03.536064    6790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:03.536094    6790 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:03.536104    6790 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:03.536143    6790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:03.536166    6790 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:03.536176    6790 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:03.536545    6790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:04:03.697490    6790 main.go:141] libmachine: Creating SSH key...
	I0831 16:04:03.800377    6790 main.go:141] libmachine: Creating Disk image...
	I0831 16:04:03.800384    6790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:04:03.800615    6790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:03.809934    6790 main.go:141] libmachine: STDOUT: 
	I0831 16:04:03.809960    6790 main.go:141] libmachine: STDERR: 
	I0831 16:04:03.810014    6790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2 +20000M
	I0831 16:04:03.817834    6790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:04:03.817855    6790 main.go:141] libmachine: STDERR: 
	I0831 16:04:03.817880    6790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:03.817886    6790 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:04:03.817901    6790 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:03.817928    6790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:31:20:d5:35:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:03.819521    6790 main.go:141] libmachine: STDOUT: 
	I0831 16:04:03.819537    6790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:03.819556    6790 client.go:171] duration metric: took 283.554875ms to LocalClient.Create
	I0831 16:04:05.821704    6790 start.go:128] duration metric: took 2.308424583s to createHost
	I0831 16:04:05.821819    6790 start.go:83] releasing machines lock for "old-k8s-version-797000", held for 2.308604375s
	W0831 16:04:05.821880    6790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:05.829239    6790 out.go:177] * Deleting "old-k8s-version-797000" in qemu2 ...
	W0831 16:04:05.862078    6790 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:05.862117    6790 start.go:729] Will try again in 5 seconds ...
	I0831 16:04:10.864243    6790 start.go:360] acquireMachinesLock for old-k8s-version-797000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:10.864831    6790 start.go:364] duration metric: took 485.75µs to acquireMachinesLock for "old-k8s-version-797000"
	I0831 16:04:10.865001    6790 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:10.865332    6790 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:10.874825    6790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:10.925423    6790 start.go:159] libmachine.API.Create for "old-k8s-version-797000" (driver="qemu2")
	I0831 16:04:10.925491    6790 client.go:168] LocalClient.Create starting
	I0831 16:04:10.925588    6790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:10.925656    6790 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:10.925670    6790 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:10.925731    6790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:10.925775    6790 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:10.925792    6790 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:10.926325    6790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:04:11.095234    6790 main.go:141] libmachine: Creating SSH key...
	I0831 16:04:11.149454    6790 main.go:141] libmachine: Creating Disk image...
	I0831 16:04:11.149461    6790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:04:11.149676    6790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:11.158909    6790 main.go:141] libmachine: STDOUT: 
	I0831 16:04:11.158925    6790 main.go:141] libmachine: STDERR: 
	I0831 16:04:11.158985    6790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2 +20000M
	I0831 16:04:11.166968    6790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:04:11.166987    6790 main.go:141] libmachine: STDERR: 
	I0831 16:04:11.166999    6790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:11.167005    6790 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:04:11.167013    6790 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:11.167044    6790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c2:68:e9:ed:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:11.168676    6790 main.go:141] libmachine: STDOUT: 
	I0831 16:04:11.168693    6790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:11.168707    6790 client.go:171] duration metric: took 243.217ms to LocalClient.Create
	I0831 16:04:13.170736    6790 start.go:128] duration metric: took 2.305455708s to createHost
	I0831 16:04:13.170756    6790 start.go:83] releasing machines lock for "old-k8s-version-797000", held for 2.305972416s
	W0831 16:04:13.170835    6790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:13.178065    6790 out.go:201] 
	W0831 16:04:13.182094    6790 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:13.182109    6790 out.go:270] * 
	* 
	W0831 16:04:13.182683    6790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:13.197111    6790 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-797000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (32.037292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)
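
Each post-mortem block above and below drives the status command and tolerates a non-zero exit when the host is merely stopped ("may be ok"). A sketch of the same probe, reusing the binary path and profile name from the log above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-797000")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out)) // "Stopped" on this agent
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The helpers treat a non-zero code as possibly OK once they see
			// the host is "Stopped", and skip log retrieval.
			fmt.Printf("host=%q exit=%d\n", host, ee.ExitCode())
		} else if err == nil {
			fmt.Printf("host=%q exit=0\n", host)
		} else {
			fmt.Println("could not run status:", err)
		}
	}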

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-797000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-797000 create -f testdata/busybox.yaml: exit status 1 (29.683917ms)

** stderr ** 
	error: context "old-k8s-version-797000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-797000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (39.326125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (30.36475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
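
This failure is a cascade rather than an independent bug: FirstStart never brought a VM up, so no "old-k8s-version-797000" context was written to the kubeconfig, and every kubectl --context call in this group fails identically. One way to confirm that from the same environment (a sketch; kubectl on PATH is assumed, and the KUBECONFIG path is the one each run prints):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "config", "get-contexts", "-o", "name")
		cmd.Env = append(os.Environ(),
			"KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		// "old-k8s-version-797000" will be missing from this list, which is
		// exactly why kubectl reports the context does not exist.
		fmt.Print(string(out))
	}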

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-797000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-797000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-797000 describe deploy/metrics-server -n kube-system: exit status 1 (27.325333ms)

** stderr ** 
	error: context "old-k8s-version-797000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-797000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (29.930208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
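Note: the assertion at start_stop_delete_test.go:221 expects the metrics-server Deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", i.e. the overrides passed via --images and --registries above. An equivalent check, sketched with a jsonpath query instead of "describe" (it assumes a working context, which this run never had):

	# Prints the container image of the metrics-server Deployment; with the
	# overrides applied it should be fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context old-k8s-version-797000 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'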

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-797000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-797000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.194280209s)

-- stdout --
	* [old-k8s-version-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-797000" primary control-plane node in "old-k8s-version-797000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:04:15.656574    6832 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:15.656735    6832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:15.656739    6832 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:15.656741    6832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:15.656860    6832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:15.657859    6832 out.go:352] Setting JSON to false
	I0831 16:04:15.674273    6832 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3819,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:15.674344    6832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:15.679714    6832 out.go:177] * [old-k8s-version-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:15.685589    6832 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:15.685637    6832 notify.go:220] Checking for updates...
	I0831 16:04:15.692681    6832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:15.695685    6832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:15.698690    6832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:15.701777    6832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:15.704563    6832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:15.707927    6832 config.go:182] Loaded profile config "old-k8s-version-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0831 16:04:15.715776    6832 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0831 16:04:15.717237    6832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:15.721619    6832 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 16:04:15.728532    6832 start.go:297] selected driver: qemu2
	I0831 16:04:15.728538    6832 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:15.728593    6832 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:15.731071    6832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:15.731091    6832 cni.go:84] Creating CNI manager for ""
	I0831 16:04:15.731099    6832 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 16:04:15.731124    6832 start.go:340] cluster config:
	{Name:old-k8s-version-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-797000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:15.734685    6832 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:15.743660    6832 out.go:177] * Starting "old-k8s-version-797000" primary control-plane node in "old-k8s-version-797000" cluster
	I0831 16:04:15.747687    6832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 16:04:15.747705    6832 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 16:04:15.747716    6832 cache.go:56] Caching tarball of preloaded images
	I0831 16:04:15.747789    6832 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:04:15.747796    6832 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 16:04:15.747859    6832 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/old-k8s-version-797000/config.json ...
	I0831 16:04:15.748401    6832 start.go:360] acquireMachinesLock for old-k8s-version-797000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:15.748431    6832 start.go:364] duration metric: took 23.833µs to acquireMachinesLock for "old-k8s-version-797000"
	I0831 16:04:15.748440    6832 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:04:15.748445    6832 fix.go:54] fixHost starting: 
	I0831 16:04:15.748566    6832 fix.go:112] recreateIfNeeded on old-k8s-version-797000: state=Stopped err=<nil>
	W0831 16:04:15.748575    6832 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:04:15.752522    6832 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-797000" ...
	I0831 16:04:15.759634    6832 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:15.759678    6832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c2:68:e9:ed:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:15.761553    6832 main.go:141] libmachine: STDOUT: 
	I0831 16:04:15.761574    6832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:15.761601    6832 fix.go:56] duration metric: took 13.15725ms for fixHost
	I0831 16:04:15.761605    6832 start.go:83] releasing machines lock for "old-k8s-version-797000", held for 13.169708ms
	W0831 16:04:15.761611    6832 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:15.761649    6832 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:15.761654    6832 start.go:729] Will try again in 5 seconds ...
	I0831 16:04:20.763737    6832 start.go:360] acquireMachinesLock for old-k8s-version-797000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:20.764241    6832 start.go:364] duration metric: took 394.666µs to acquireMachinesLock for "old-k8s-version-797000"
	I0831 16:04:20.764401    6832 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:04:20.764423    6832 fix.go:54] fixHost starting: 
	I0831 16:04:20.765201    6832 fix.go:112] recreateIfNeeded on old-k8s-version-797000: state=Stopped err=<nil>
	W0831 16:04:20.765228    6832 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:04:20.773585    6832 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-797000" ...
	I0831 16:04:20.777672    6832 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:20.777959    6832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c2:68:e9:ed:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/old-k8s-version-797000/disk.qcow2
	I0831 16:04:20.787214    6832 main.go:141] libmachine: STDOUT: 
	I0831 16:04:20.787279    6832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:20.787361    6832 fix.go:56] duration metric: took 22.941875ms for fixHost
	I0831 16:04:20.787374    6832 start.go:83] releasing machines lock for "old-k8s-version-797000", held for 23.110625ms
	W0831 16:04:20.787541    6832 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:20.795600    6832 out.go:201] 
	W0831 16:04:20.799719    6832 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:20.799752    6832 out.go:270] * 
	* 
	W0831 16:04:20.802497    6832 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:20.809635    6832 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-797000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (61.79475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
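Note: both restart attempts die at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at the SocketVMnetPath recorded in the cluster config (/var/run/socket_vmnet), so the qemu2 VM is never brought up. A minimal host-side spot-check, using the paths from the config above; the Homebrew service invocation is an assumption about how socket_vmnet was installed on this agent:

	# The path must exist and be a unix socket the daemon is listening on.
	ls -l /var/run/socket_vmnet
	# The daemon itself must be running.
	pgrep -fl socket_vmnet
	# If it is not running and socket_vmnet came from Homebrew:
	#   sudo brew services start socket_vmnet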

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-797000" does not exist
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (31.587917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-797000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.732208ms)

** stderr ** 
	error: context "old-k8s-version-797000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (29.570542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-797000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (29.510083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
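Note: the (-want +got) diff above has an empty "got" side: "image list" ran against a profile whose VM never started, so all eight v1.20.0 images are reported missing rather than mismatched. The same listing in human-readable form, assuming table output is among the --format values the image list subcommand accepts:

	out/minikube-darwin-arm64 -p old-k8s-version-797000 image list --format=table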

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-797000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-797000 --alsologtostderr -v=1: exit status 83 (40.918208ms)

-- stdout --
	* The control-plane node old-k8s-version-797000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-797000"

-- /stdout --
** stderr ** 
	I0831 16:04:21.076177    6851 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:21.076589    6851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:21.076593    6851 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:21.076595    6851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:21.076759    6851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:21.076978    6851 out.go:352] Setting JSON to false
	I0831 16:04:21.076987    6851 mustload.go:65] Loading cluster: old-k8s-version-797000
	I0831 16:04:21.077186    6851 config.go:182] Loaded profile config "old-k8s-version-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0831 16:04:21.081255    6851 out.go:177] * The control-plane node old-k8s-version-797000 host is not running: state=Stopped
	I0831 16:04:21.085113    6851 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-797000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-797000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (29.952042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (29.584667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
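Note: pause refuses to act on a stopped host (exit status 83) and prints its own remediation. Restated as commands against the same binary and profile, the sequence the message suggests is:

	# Bring the control-plane node back up, then pause it.
	out/minikube-darwin-arm64 start -p old-k8s-version-797000
	out/minikube-darwin-arm64 pause -p old-k8s-version-797000 --alsologtostderr -v=1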

TestStartStop/group/no-preload/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.968002s)

-- stdout --
	* [no-preload-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-233000" primary control-plane node in "no-preload-233000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-233000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:04:21.391390    6868 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:21.391529    6868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:21.391532    6868 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:21.391534    6868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:21.391658    6868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:21.392721    6868 out.go:352] Setting JSON to false
	I0831 16:04:21.409822    6868 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3825,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:21.409896    6868 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:21.414190    6868 out.go:177] * [no-preload-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:21.420207    6868 notify.go:220] Checking for updates...
	I0831 16:04:21.424211    6868 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:21.427989    6868 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:21.431131    6868 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:21.434194    6868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:21.437140    6868 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:21.440172    6868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:21.443547    6868 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:21.443604    6868 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:04:21.443645    6868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:21.448138    6868 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:04:21.455100    6868 start.go:297] selected driver: qemu2
	I0831 16:04:21.455106    6868 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:04:21.455111    6868 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:21.457486    6868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:04:21.460142    6868 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:04:21.463186    6868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:21.463212    6868 cni.go:84] Creating CNI manager for ""
	I0831 16:04:21.463221    6868 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:04:21.463225    6868 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:04:21.463266    6868 start.go:340] cluster config:
	{Name:no-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:21.467113    6868 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.475060    6868 out.go:177] * Starting "no-preload-233000" primary control-plane node in "no-preload-233000" cluster
	I0831 16:04:21.479119    6868 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:04:21.479185    6868 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/no-preload-233000/config.json ...
	I0831 16:04:21.479198    6868 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/no-preload-233000/config.json: {Name:mk580f89e42a40fa4f76f9f6105c644df5364ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:04:21.479203    6868 cache.go:107] acquiring lock: {Name:mkfc1e7c75a166fa58acad4eff3515f7385523bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479212    6868 cache.go:107] acquiring lock: {Name:mk1f178bf383a406300283d3e7e702e314b1ef04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479296    6868 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0831 16:04:21.479304    6868 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103µs
	I0831 16:04:21.479311    6868 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0831 16:04:21.479331    6868 cache.go:107] acquiring lock: {Name:mk968f78147acff5b36e2411bbef4d6b6a2d11c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479346    6868 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0831 16:04:21.479330    6868 cache.go:107] acquiring lock: {Name:mkc3b1569a5a8144f24e7e294a13cce3945558f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479410    6868 cache.go:107] acquiring lock: {Name:mk19ea87619a7da8522ca6cc90f12e2337925a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479431    6868 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0831 16:04:21.479445    6868 start.go:360] acquireMachinesLock for no-preload-233000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:21.479429    6868 cache.go:107] acquiring lock: {Name:mk7fe31350ad83528ddd10eff9b58b034f6f30b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479458    6868 cache.go:107] acquiring lock: {Name:mke8e716c910990bd8e336b5adba7963d884f39b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479537    6868 start.go:364] duration metric: took 86.917µs to acquireMachinesLock for "no-preload-233000"
	I0831 16:04:21.479565    6868 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0831 16:04:21.479547    6868 start.go:93] Provisioning new machine with config: &{Name:no-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:21.479566    6868 cache.go:107] acquiring lock: {Name:mkb42c9cb990c15dfc17a2350246d9b412f65219 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:21.479581    6868 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:21.479696    6868 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0831 16:04:21.479744    6868 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0831 16:04:21.479667    6868 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0831 16:04:21.479677    6868 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0831 16:04:21.483131    6868 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:21.493737    6868 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0831 16:04:21.493772    6868 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0831 16:04:21.494395    6868 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0831 16:04:21.495112    6868 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0831 16:04:21.495999    6868 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0831 16:04:21.496112    6868 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0831 16:04:21.496234    6868 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0831 16:04:21.500146    6868 start.go:159] libmachine.API.Create for "no-preload-233000" (driver="qemu2")
	I0831 16:04:21.500183    6868 client.go:168] LocalClient.Create starting
	I0831 16:04:21.500272    6868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:21.500303    6868 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:21.500313    6868 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:21.500415    6868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:21.500441    6868 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:21.500446    6868 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:21.500802    6868 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:04:21.666619    6868 main.go:141] libmachine: Creating SSH key...
	I0831 16:04:21.736358    6868 main.go:141] libmachine: Creating Disk image...
	I0831 16:04:21.736377    6868 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:04:21.736579    6868 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:21.747434    6868 main.go:141] libmachine: STDOUT: 
	I0831 16:04:21.747454    6868 main.go:141] libmachine: STDERR: 
	I0831 16:04:21.747499    6868 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2 +20000M
	I0831 16:04:21.756567    6868 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:04:21.756585    6868 main.go:141] libmachine: STDERR: 
	I0831 16:04:21.756605    6868 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:21.756610    6868 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:04:21.756621    6868 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:21.756650    6868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:a8:67:b3:cf:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:21.758869    6868 main.go:141] libmachine: STDOUT: 
	I0831 16:04:21.758918    6868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:21.758938    6868 client.go:171] duration metric: took 258.757917ms to LocalClient.Create
	I0831 16:04:21.898472    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0831 16:04:21.911017    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0831 16:04:21.940071    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0831 16:04:21.961059    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0831 16:04:21.962601    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0831 16:04:21.988681    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0831 16:04:21.994412    6868 cache.go:162] opening:  /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0831 16:04:22.180157    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0831 16:04:22.180173    6868 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 700.870625ms
	I0831 16:04:22.180181    6868 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0831 16:04:23.758971    6868 start.go:128] duration metric: took 2.279394292s to createHost
	I0831 16:04:23.758995    6868 start.go:83] releasing machines lock for "no-preload-233000", held for 2.279519625s
	W0831 16:04:23.759030    6868 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:23.773597    6868 out.go:177] * Deleting "no-preload-233000" in qemu2 ...
	W0831 16:04:23.789120    6868 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:23.789132    6868 start.go:729] Will try again in 5 seconds ...
	I0831 16:04:24.957556    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0831 16:04:24.957592    6868 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 3.4784815s
	I0831 16:04:24.957607    6868 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0831 16:04:25.028121    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0831 16:04:25.028133    6868 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.548882458s
	I0831 16:04:25.028140    6868 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0831 16:04:25.290004    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0831 16:04:25.290048    6868 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 3.81085475s
	I0831 16:04:25.290059    6868 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0831 16:04:25.479698    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0831 16:04:25.479715    6868 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.0003735s
	I0831 16:04:25.479722    6868 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0831 16:04:25.873522    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0831 16:04:25.873557    6868 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.39430425s
	I0831 16:04:25.873573    6868 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0831 16:04:28.789279    6868 start.go:360] acquireMachinesLock for no-preload-233000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:28.789360    6868 start.go:364] duration metric: took 70.375µs to acquireMachinesLock for "no-preload-233000"
	I0831 16:04:28.789373    6868 start.go:93] Provisioning new machine with config: &{Name:no-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:28.789407    6868 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:28.797803    6868 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:28.814172    6868 start.go:159] libmachine.API.Create for "no-preload-233000" (driver="qemu2")
	I0831 16:04:28.814204    6868 client.go:168] LocalClient.Create starting
	I0831 16:04:28.814270    6868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:28.814326    6868 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:28.814335    6868 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:28.814385    6868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:28.814407    6868 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:28.814416    6868 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:28.814718    6868 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:04:29.153907    6868 cache.go:157] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0831 16:04:29.153933    6868 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.674634208s
	I0831 16:04:29.153942    6868 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0831 16:04:29.153965    6868 cache.go:87] Successfully saved all images to host disk.
	I0831 16:04:29.198406    6868 main.go:141] libmachine: Creating SSH key...
	I0831 16:04:29.268853    6868 main.go:141] libmachine: Creating Disk image...
	I0831 16:04:29.268861    6868 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:04:29.269104    6868 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:29.278916    6868 main.go:141] libmachine: STDOUT: 
	I0831 16:04:29.278940    6868 main.go:141] libmachine: STDERR: 
	I0831 16:04:29.279018    6868 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2 +20000M
	I0831 16:04:29.287456    6868 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:04:29.287474    6868 main.go:141] libmachine: STDERR: 
	I0831 16:04:29.287485    6868 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:29.287490    6868 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:04:29.287506    6868 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:29.287538    6868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ae:af:70:7d:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:29.289312    6868 main.go:141] libmachine: STDOUT: 
	I0831 16:04:29.289330    6868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:29.289343    6868 client.go:171] duration metric: took 475.149333ms to LocalClient.Create
	I0831 16:04:31.291481    6868 start.go:128] duration metric: took 2.502116833s to createHost
	I0831 16:04:31.291551    6868 start.go:83] releasing machines lock for "no-preload-233000", held for 2.502254833s
	W0831 16:04:31.291977    6868 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-233000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:31.301477    6868 out.go:201] 
	W0831 16:04:31.307614    6868 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:31.307687    6868 out.go:270] * 
	W0831 16:04:31.309770    6868 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:31.318595    6868 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (58.57925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.03s)
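Every failure in this group reduces to the same host-side error: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the socket_vmnet daemon on the unix socket /var/run/socket_vmnet (the SocketVMnetPath in the profile config above), and that dial is refused. Below is a minimal Go sketch of the same reachability probe, useful for triage on the build host; the probe is illustrative and not part of the minikube test suite.

// probe_socket_vmnet.go: dial the socket_vmnet control socket the way
// socket_vmnet_client would. A "connection refused" here reproduces the
// error seen throughout this log and means the daemon is not listening.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

When the dial fails, socket_vmnet_client never obtains the file descriptor that the qemu command line above passes to QEMU as -netdev socket,id=net0,fd=3, so no VM can be started at all.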

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-233000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-233000 create -f testdata/busybox.yaml: exit status 1 (29.135041ms)

** stderr ** 
	error: context "no-preload-233000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-233000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (30.296459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (30.197041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-233000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-233000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-233000 describe deploy/metrics-server -n kube-system: exit status 1 (27.714416ms)

** stderr ** 
	error: context "no-preload-233000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-233000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (29.26325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.175404667s)

-- stdout --
	* [no-preload-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-233000" primary control-plane node in "no-preload-233000" cluster
	* Restarting existing qemu2 VM for "no-preload-233000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-233000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:04:35.045958    6949 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:35.046079    6949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:35.046083    6949 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:35.046085    6949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:35.046202    6949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:35.047235    6949 out.go:352] Setting JSON to false
	I0831 16:04:35.063448    6949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3839,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:35.063526    6949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:35.066600    6949 out.go:177] * [no-preload-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:35.073516    6949 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:35.073569    6949 notify.go:220] Checking for updates...
	I0831 16:04:35.080399    6949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:35.083449    6949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:35.086477    6949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:35.087789    6949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:35.090447    6949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:35.093782    6949 config.go:182] Loaded profile config "no-preload-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:35.094042    6949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:35.098360    6949 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 16:04:35.105463    6949 start.go:297] selected driver: qemu2
	I0831 16:04:35.105468    6949 start.go:901] validating driver "qemu2" against &{Name:no-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:35.105528    6949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:35.107729    6949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:35.107754    6949 cni.go:84] Creating CNI manager for ""
	I0831 16:04:35.107760    6949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:04:35.107796    6949 start.go:340] cluster config:
	{Name:no-preload-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:35.111085    6949 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.119445    6949 out.go:177] * Starting "no-preload-233000" primary control-plane node in "no-preload-233000" cluster
	I0831 16:04:35.123519    6949 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:04:35.123580    6949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/no-preload-233000/config.json ...
	I0831 16:04:35.123612    6949 cache.go:107] acquiring lock: {Name:mkfc1e7c75a166fa58acad4eff3515f7385523bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123609    6949 cache.go:107] acquiring lock: {Name:mk7fe31350ad83528ddd10eff9b58b034f6f30b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123617    6949 cache.go:107] acquiring lock: {Name:mkc3b1569a5a8144f24e7e294a13cce3945558f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123667    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0831 16:04:35.123668    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0831 16:04:35.123671    6949 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 64.708µs
	I0831 16:04:35.123673    6949 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 63.125µs
	I0831 16:04:35.123677    6949 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0831 16:04:35.123677    6949 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0831 16:04:35.123684    6949 cache.go:107] acquiring lock: {Name:mk968f78147acff5b36e2411bbef4d6b6a2d11c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123690    6949 cache.go:107] acquiring lock: {Name:mkb42c9cb990c15dfc17a2350246d9b412f65219 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123703    6949 cache.go:107] acquiring lock: {Name:mk1f178bf383a406300283d3e7e702e314b1ef04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123717    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0831 16:04:35.123720    6949 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 36.625µs
	I0831 16:04:35.123724    6949 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0831 16:04:35.123729    6949 cache.go:107] acquiring lock: {Name:mke8e716c910990bd8e336b5adba7963d884f39b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123744    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0831 16:04:35.123725    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0831 16:04:35.123748    6949 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 45.667µs
	I0831 16:04:35.123753    6949 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0831 16:04:35.123760    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0831 16:04:35.123693    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0831 16:04:35.123764    6949 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 34.917µs
	I0831 16:04:35.123765    6949 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 155.083µs
	I0831 16:04:35.123769    6949 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0831 16:04:35.123767    6949 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0831 16:04:35.123786    6949 cache.go:107] acquiring lock: {Name:mk19ea87619a7da8522ca6cc90f12e2337925a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:35.123783    6949 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 63.959µs
	I0831 16:04:35.123800    6949 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0831 16:04:35.123824    6949 cache.go:115] /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0831 16:04:35.123828    6949 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 59.291µs
	I0831 16:04:35.123831    6949 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0831 16:04:35.123835    6949 cache.go:87] Successfully saved all images to host disk.
	I0831 16:04:35.124007    6949 start.go:360] acquireMachinesLock for no-preload-233000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:35.124037    6949 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "no-preload-233000"
	I0831 16:04:35.124045    6949 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:04:35.124049    6949 fix.go:54] fixHost starting: 
	I0831 16:04:35.124157    6949 fix.go:112] recreateIfNeeded on no-preload-233000: state=Stopped err=<nil>
	W0831 16:04:35.124165    6949 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:04:35.132501    6949 out.go:177] * Restarting existing qemu2 VM for "no-preload-233000" ...
	I0831 16:04:35.136472    6949 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:35.136504    6949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ae:af:70:7d:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:35.138332    6949 main.go:141] libmachine: STDOUT: 
	I0831 16:04:35.138351    6949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:35.138377    6949 fix.go:56] duration metric: took 14.327458ms for fixHost
	I0831 16:04:35.138380    6949 start.go:83] releasing machines lock for "no-preload-233000", held for 14.339208ms
	W0831 16:04:35.138387    6949 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:35.138416    6949 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:35.138420    6949 start.go:729] Will try again in 5 seconds ...
	I0831 16:04:40.140468    6949 start.go:360] acquireMachinesLock for no-preload-233000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:40.140882    6949 start.go:364] duration metric: took 342.708µs to acquireMachinesLock for "no-preload-233000"
	I0831 16:04:40.140971    6949 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:04:40.140989    6949 fix.go:54] fixHost starting: 
	I0831 16:04:40.141518    6949 fix.go:112] recreateIfNeeded on no-preload-233000: state=Stopped err=<nil>
	W0831 16:04:40.141535    6949 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:04:40.149990    6949 out.go:177] * Restarting existing qemu2 VM for "no-preload-233000" ...
	I0831 16:04:40.154084    6949 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:40.154218    6949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ae:af:70:7d:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/no-preload-233000/disk.qcow2
	I0831 16:04:40.162159    6949 main.go:141] libmachine: STDOUT: 
	I0831 16:04:40.162214    6949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:40.162294    6949 fix.go:56] duration metric: took 21.30625ms for fixHost
	I0831 16:04:40.162307    6949 start.go:83] releasing machines lock for "no-preload-233000", held for 21.409375ms
	W0831 16:04:40.162472    6949 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-233000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:40.169996    6949 out.go:201] 
	W0831 16:04:40.173111    6949 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:40.173127    6949 out.go:270] * 
	W0831 16:04:40.174708    6949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:40.187977    6949 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-233000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (53.855917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-233000" does not exist
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (30.456667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-233000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.794834ms)

** stderr ** 
	error: context "no-preload-233000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (29.969625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-233000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (29.503709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
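The (-want +got) block above is a go-cmp style diff: every expected image carries a "-" prefix because "image list" returned nothing for the stopped VM. The following standalone sketch reproduces the same shape of output, assuming the github.com/google/go-cmp module and an abbreviated image list:

// imagediff.go: render a -want/+got diff of expected vs. actual image
// lists. With got empty (the VM never started), all entries show as "-".
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: no-preload-233000 is Stopped, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
	}
}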

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-233000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-233000 --alsologtostderr -v=1: exit status 83 (38.678166ms)

-- stdout --
	* The control-plane node no-preload-233000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-233000"

-- /stdout --
** stderr ** 
	I0831 16:04:40.435413    6969 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:40.435555    6969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:40.435558    6969 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:40.435561    6969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:40.435705    6969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:40.435944    6969 out.go:352] Setting JSON to false
	I0831 16:04:40.435950    6969 mustload.go:65] Loading cluster: no-preload-233000
	I0831 16:04:40.436152    6969 config.go:182] Loaded profile config "no-preload-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:40.439273    6969 out.go:177] * The control-plane node no-preload-233000 host is not running: state=Stopped
	I0831 16:04:40.442011    6969 out.go:177]   To start a cluster, run: "minikube start -p no-preload-233000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-233000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (28.455125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (29.705584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "no-preload-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-169000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-169000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.864115334s)

-- stdout --
	* [embed-certs-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-169000" primary control-plane node in "embed-certs-169000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-169000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:04:40.748710    6986 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:40.748844    6986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:40.748847    6986 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:40.748849    6986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:40.748997    6986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:40.750072    6986 out.go:352] Setting JSON to false
	I0831 16:04:40.766439    6986 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3844,"bootTime":1725141636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:40.766511    6986 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:40.771225    6986 out.go:177] * [embed-certs-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:40.778192    6986 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:40.778289    6986 notify.go:220] Checking for updates...
	I0831 16:04:40.785104    6986 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:40.788082    6986 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:40.791157    6986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:40.794184    6986 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:40.797111    6986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:40.800427    6986 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:40.800481    6986 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:04:40.800544    6986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:40.804134    6986 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:04:40.811098    6986 start.go:297] selected driver: qemu2
	I0831 16:04:40.811104    6986 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:04:40.811110    6986 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:40.813462    6986 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:04:40.817214    6986 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:04:40.820265    6986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:40.820309    6986 cni.go:84] Creating CNI manager for ""
	I0831 16:04:40.820316    6986 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:04:40.820320    6986 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:04:40.820360    6986 start.go:340] cluster config:
	{Name:embed-certs-169000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-169000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:40.823919    6986 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:40.831175    6986 out.go:177] * Starting "embed-certs-169000" primary control-plane node in "embed-certs-169000" cluster
	I0831 16:04:40.835095    6986 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:04:40.835108    6986 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:04:40.835118    6986 cache.go:56] Caching tarball of preloaded images
	I0831 16:04:40.835174    6986 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:04:40.835180    6986 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:04:40.835241    6986 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/embed-certs-169000/config.json ...
	I0831 16:04:40.835251    6986 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/embed-certs-169000/config.json: {Name:mka834ca5362131e2b29e113249fead7323f4e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:04:40.835457    6986 start.go:360] acquireMachinesLock for embed-certs-169000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:40.835489    6986 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "embed-certs-169000"
	I0831 16:04:40.835498    6986 start.go:93] Provisioning new machine with config: &{Name:embed-certs-169000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-169000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:40.835526    6986 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:40.844064    6986 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:40.860759    6986 start.go:159] libmachine.API.Create for "embed-certs-169000" (driver="qemu2")
	I0831 16:04:40.860787    6986 client.go:168] LocalClient.Create starting
	I0831 16:04:40.860852    6986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:40.860883    6986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:40.860893    6986 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:40.860928    6986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:40.860952    6986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:40.860959    6986 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:40.861311    6986 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:04:41.035095    6986 main.go:141] libmachine: Creating SSH key...
	I0831 16:04:41.136981    6986 main.go:141] libmachine: Creating Disk image...
	I0831 16:04:41.136987    6986 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:04:41.137190    6986 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:41.146633    6986 main.go:141] libmachine: STDOUT: 
	I0831 16:04:41.146651    6986 main.go:141] libmachine: STDERR: 
	I0831 16:04:41.146693    6986 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2 +20000M
	I0831 16:04:41.154592    6986 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:04:41.154618    6986 main.go:141] libmachine: STDERR: 
	I0831 16:04:41.154634    6986 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:41.154638    6986 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:04:41.154648    6986 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:41.154683    6986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:61:80:40:a0:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:41.156275    6986 main.go:141] libmachine: STDOUT: 
	I0831 16:04:41.156290    6986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:41.156310    6986 client.go:171] duration metric: took 295.525833ms to LocalClient.Create
	I0831 16:04:43.158455    6986 start.go:128] duration metric: took 2.322965917s to createHost
	I0831 16:04:43.158522    6986 start.go:83] releasing machines lock for "embed-certs-169000", held for 2.323093667s
	W0831 16:04:43.158601    6986 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:43.170325    6986 out.go:177] * Deleting "embed-certs-169000" in qemu2 ...
	W0831 16:04:43.198800    6986 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:43.198823    6986 start.go:729] Will try again in 5 seconds ...
	I0831 16:04:48.201020    6986 start.go:360] acquireMachinesLock for embed-certs-169000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:48.201548    6986 start.go:364] duration metric: took 430µs to acquireMachinesLock for "embed-certs-169000"
	I0831 16:04:48.201628    6986 start.go:93] Provisioning new machine with config: &{Name:embed-certs-169000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:embed-certs-169000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:48.201990    6986 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:48.210609    6986 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:48.262299    6986 start.go:159] libmachine.API.Create for "embed-certs-169000" (driver="qemu2")
	I0831 16:04:48.262348    6986 client.go:168] LocalClient.Create starting
	I0831 16:04:48.262456    6986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:48.262516    6986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:48.262530    6986 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:48.262596    6986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:48.262643    6986 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:48.262654    6986 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:48.263196    6986 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:04:48.433091    6986 main.go:141] libmachine: Creating SSH key...
	I0831 16:04:48.517633    6986 main.go:141] libmachine: Creating Disk image...
	I0831 16:04:48.517640    6986 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:04:48.517845    6986 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:48.527503    6986 main.go:141] libmachine: STDOUT: 
	I0831 16:04:48.527528    6986 main.go:141] libmachine: STDERR: 
	I0831 16:04:48.527588    6986 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2 +20000M
	I0831 16:04:48.535983    6986 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:04:48.535999    6986 main.go:141] libmachine: STDERR: 
	I0831 16:04:48.536016    6986 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:48.536022    6986 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:04:48.536033    6986 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:48.536061    6986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6d:fa:33:7a:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:48.537703    6986 main.go:141] libmachine: STDOUT: 
	I0831 16:04:48.537720    6986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:48.537735    6986 client.go:171] duration metric: took 275.391333ms to LocalClient.Create
	I0831 16:04:50.539799    6986 start.go:128] duration metric: took 2.33782875s to createHost
	I0831 16:04:50.539841    6986 start.go:83] releasing machines lock for "embed-certs-169000", held for 2.338339042s
	W0831 16:04:50.540087    6986 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:50.554505    6986 out.go:201] 
	W0831 16:04:50.557546    6986 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:50.557572    6986 out.go:270] * 
	* 
	W0831 16:04:50.559275    6986 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:50.569294    6986 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-169000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (50.706209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)
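
Note: every start attempt in this group fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. A quick way to triage this on the CI host (illustrative commands, assuming a Homebrew-managed socket_vmnet install):

    # Does the socket exist, and is the daemon holding it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet
    # If the service is missing or wedged, restarting it usually restores the socket
    sudo brew services restart socket_vmnet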

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-169000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-169000 create -f testdata/busybox.yaml: exit status 1 (28.776917ms)

** stderr ** 
	error: context "embed-certs-169000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-169000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (29.697834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (29.573125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
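
Note: this failure is a direct consequence of the one above. minikube only writes the embed-certs-169000 context into the kubeconfig after a successful start, so every kubectl --context call in this serial group fails with "context does not exist". One way to confirm (illustrative):

    kubectl config get-contexts                               # embed-certs-169000 will be absent
    out/minikube-darwin-arm64 status -p embed-certs-169000    # reports Stopped (exit 7)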

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-169000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-169000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-169000 describe deploy/metrics-server -n kube-system: exit status 1 (26.928542ms)

** stderr ** 
	error: context "embed-certs-169000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-169000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (30.269833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
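
Note: the addons enable invocation above overrides both the metrics-server image and its registry, which is why the test expects the deployment to reference fake.domain/registry.k8s.io/echoserver:1.4. On a running cluster that expectation could be checked directly (illustrative; requires the context to exist):

    kubectl --context embed-certs-169000 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'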

TestStartStop/group/embed-certs/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-169000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-169000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.178010125s)

-- stdout --
	* [embed-certs-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-169000" primary control-plane node in "embed-certs-169000" cluster
	* Restarting existing qemu2 VM for "embed-certs-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:04:54.040157    7042 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:54.040283    7042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:54.040286    7042 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:54.040289    7042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:54.040414    7042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:54.041411    7042 out.go:352] Setting JSON to false
	I0831 16:04:54.057506    7042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3858,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:54.057571    7042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:54.061475    7042 out.go:177] * [embed-certs-169000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:54.068423    7042 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:54.068463    7042 notify.go:220] Checking for updates...
	I0831 16:04:54.075355    7042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:54.078328    7042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:54.081422    7042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:54.084397    7042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:54.087355    7042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:54.090618    7042 config.go:182] Loaded profile config "embed-certs-169000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:54.090879    7042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:54.093247    7042 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 16:04:54.100413    7042 start.go:297] selected driver: qemu2
	I0831 16:04:54.100419    7042 start.go:901] validating driver "qemu2" against &{Name:embed-certs-169000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:embed-certs-169000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:54.100468    7042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:54.102575    7042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:54.102600    7042 cni.go:84] Creating CNI manager for ""
	I0831 16:04:54.102607    7042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:04:54.102637    7042 start.go:340] cluster config:
	{Name:embed-certs-169000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-169000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:54.105897    7042 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:54.113396    7042 out.go:177] * Starting "embed-certs-169000" primary control-plane node in "embed-certs-169000" cluster
	I0831 16:04:54.117353    7042 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:04:54.117368    7042 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:04:54.117375    7042 cache.go:56] Caching tarball of preloaded images
	I0831 16:04:54.117431    7042 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:04:54.117437    7042 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:04:54.117485    7042 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/embed-certs-169000/config.json ...
	I0831 16:04:54.117947    7042 start.go:360] acquireMachinesLock for embed-certs-169000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:54.117987    7042 start.go:364] duration metric: took 32.667µs to acquireMachinesLock for "embed-certs-169000"
	I0831 16:04:54.117995    7042 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:04:54.118000    7042 fix.go:54] fixHost starting: 
	I0831 16:04:54.118110    7042 fix.go:112] recreateIfNeeded on embed-certs-169000: state=Stopped err=<nil>
	W0831 16:04:54.118118    7042 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:04:54.122426    7042 out.go:177] * Restarting existing qemu2 VM for "embed-certs-169000" ...
	I0831 16:04:54.130304    7042 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:54.130338    7042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6d:fa:33:7a:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:54.132165    7042 main.go:141] libmachine: STDOUT: 
	I0831 16:04:54.132184    7042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:54.132212    7042 fix.go:56] duration metric: took 14.213375ms for fixHost
	I0831 16:04:54.132216    7042 start.go:83] releasing machines lock for "embed-certs-169000", held for 14.225125ms
	W0831 16:04:54.132223    7042 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:54.132252    7042 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:54.132256    7042 start.go:729] Will try again in 5 seconds ...
	I0831 16:04:59.134310    7042 start.go:360] acquireMachinesLock for embed-certs-169000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:59.134906    7042 start.go:364] duration metric: took 510.708µs to acquireMachinesLock for "embed-certs-169000"
	I0831 16:04:59.135089    7042 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:04:59.135112    7042 fix.go:54] fixHost starting: 
	I0831 16:04:59.135787    7042 fix.go:112] recreateIfNeeded on embed-certs-169000: state=Stopped err=<nil>
	W0831 16:04:59.135812    7042 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:04:59.139383    7042 out.go:177] * Restarting existing qemu2 VM for "embed-certs-169000" ...
	I0831 16:04:59.146440    7042 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:04:59.146645    7042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6d:fa:33:7a:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/embed-certs-169000/disk.qcow2
	I0831 16:04:59.155378    7042 main.go:141] libmachine: STDOUT: 
	I0831 16:04:59.155450    7042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:04:59.155529    7042 fix.go:56] duration metric: took 20.420208ms for fixHost
	I0831 16:04:59.155548    7042 start.go:83] releasing machines lock for "embed-certs-169000", held for 20.61725ms
	W0831 16:04:59.155793    7042 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:04:59.164290    7042 out.go:201] 
	W0831 16:04:59.167277    7042 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:04:59.167325    7042 out.go:270] * 
	* 
	W0831 16:04:59.169159    7042 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:04:59.179276    7042 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-169000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (60.2715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.24s)
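
Note: on the second start minikube takes the fixHost path ("Skipping create...Using existing machine configuration", state=Stopped, "Restarting existing qemu2 VM") rather than provisioning a new machine, but it fails on the same socket_vmnet connection. Once the daemon is healthy again, the recovery the log itself suggests would be (illustrative):

    out/minikube-darwin-arm64 delete -p embed-certs-169000
    out/minikube-darwin-arm64 start -p embed-certs-169000 --driver=qemu2 --embed-certs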

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-169000" does not exist
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (31.053708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-169000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-169000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-169000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.205125ms)

** stderr ** 
	error: context "embed-certs-169000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-169000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (28.868916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
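
Note: like EnableAddonWhileActive, this check can only pass once the dashboard addon pods exist; with no cluster, the describe call fails before any image comparison happens. The expected scraper image matches the CustomAddonImages map (MetricsScraper:registry.k8s.io/echoserver:1.4) visible in the profile config logged at 16:04:54 above, and on a running cluster could be verified with (illustrative):

    kubectl --context embed-certs-169000 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[0].image}'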

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-169000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (29.34ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
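
Note: this test compares the images present in the VM against the expected set for v1.31.0. With the host stopped, image list returns nothing, so the whole want-list shows as missing rather than any single image being wrong. On a healthy profile the same data is readable as a table (illustrative):

    out/minikube-darwin-arm64 -p embed-certs-169000 image list --format=table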

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-169000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-169000 --alsologtostderr -v=1: exit status 83 (39.840125ms)

-- stdout --
	* The control-plane node embed-certs-169000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-169000"

                                                
** stderr ** 
	I0831 16:04:59.436870    7061 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:59.437018    7061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:59.437021    7061 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:59.437024    7061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:59.437149    7061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:59.437365    7061 out.go:352] Setting JSON to false
	I0831 16:04:59.437372    7061 mustload.go:65] Loading cluster: embed-certs-169000
	I0831 16:04:59.437558    7061 config.go:182] Loaded profile config "embed-certs-169000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:59.442502    7061 out.go:177] * The control-plane node embed-certs-169000 host is not running: state=Stopped
	I0831 16:04:59.445278    7061 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-169000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-169000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (29.759667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (29.697917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "embed-certs-169000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.901522958s)

-- stdout --
	* [default-k8s-diff-port-646000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-646000" primary control-plane node in "default-k8s-diff-port-646000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0831 16:04:59.862875    7085 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:04:59.863007    7085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:59.863011    7085 out.go:358] Setting ErrFile to fd 2...
	I0831 16:04:59.863014    7085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:04:59.863157    7085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:04:59.864220    7085 out.go:352] Setting JSON to false
	I0831 16:04:59.880758    7085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3863,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:04:59.880835    7085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:04:59.885958    7085 out.go:177] * [default-k8s-diff-port-646000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:04:59.891887    7085 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:04:59.891924    7085 notify.go:220] Checking for updates...
	I0831 16:04:59.899779    7085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:04:59.903815    7085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:04:59.906872    7085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:04:59.909857    7085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:04:59.912839    7085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:04:59.916138    7085 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:04:59.916193    7085 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0831 16:04:59.916248    7085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:04:59.919736    7085 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:04:59.926888    7085 start.go:297] selected driver: qemu2
	I0831 16:04:59.926893    7085 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:04:59.926898    7085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:04:59.929113    7085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 16:04:59.932834    7085 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:04:59.935936    7085 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:04:59.935974    7085 cni.go:84] Creating CNI manager for ""
	I0831 16:04:59.935983    7085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:04:59.935988    7085 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:04:59.936017    7085 start.go:340] cluster config:
	{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:04:59.939487    7085 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:04:59.947798    7085 out.go:177] * Starting "default-k8s-diff-port-646000" primary control-plane node in "default-k8s-diff-port-646000" cluster
	I0831 16:04:59.951905    7085 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:04:59.951921    7085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:04:59.951935    7085 cache.go:56] Caching tarball of preloaded images
	I0831 16:04:59.951994    7085 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:04:59.952011    7085 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:04:59.952073    7085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/default-k8s-diff-port-646000/config.json ...
	I0831 16:04:59.952084    7085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/default-k8s-diff-port-646000/config.json: {Name:mk52fc7c140d8743e5ecf1614f2efcf75efe49fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:04:59.952303    7085 start.go:360] acquireMachinesLock for default-k8s-diff-port-646000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:04:59.952336    7085 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "default-k8s-diff-port-646000"
	I0831 16:04:59.952346    7085 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:04:59.952387    7085 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:04:59.959836    7085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:04:59.975484    7085 start.go:159] libmachine.API.Create for "default-k8s-diff-port-646000" (driver="qemu2")
	I0831 16:04:59.975508    7085 client.go:168] LocalClient.Create starting
	I0831 16:04:59.975575    7085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:04:59.975606    7085 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:59.975615    7085 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:59.975652    7085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:04:59.975682    7085 main.go:141] libmachine: Decoding PEM data...
	I0831 16:04:59.975687    7085 main.go:141] libmachine: Parsing certificate...
	I0831 16:04:59.976027    7085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:05:00.137678    7085 main.go:141] libmachine: Creating SSH key...
	I0831 16:05:00.194556    7085 main.go:141] libmachine: Creating Disk image...
	I0831 16:05:00.194561    7085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:05:00.194744    7085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:00.204097    7085 main.go:141] libmachine: STDOUT: 
	I0831 16:05:00.204119    7085 main.go:141] libmachine: STDERR: 
	I0831 16:05:00.204174    7085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2 +20000M
	I0831 16:05:00.212235    7085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:05:00.212251    7085 main.go:141] libmachine: STDERR: 
	I0831 16:05:00.212266    7085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:00.212278    7085 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:05:00.212293    7085 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:00.212318    7085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:9b:b1:3e:3d:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:00.213955    7085 main.go:141] libmachine: STDOUT: 
	I0831 16:05:00.213970    7085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:00.213991    7085 client.go:171] duration metric: took 238.486ms to LocalClient.Create
	I0831 16:05:02.216139    7085 start.go:128] duration metric: took 2.263796542s to createHost
	I0831 16:05:02.216194    7085 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 2.263914667s
	W0831 16:05:02.216266    7085 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:02.233883    7085 out.go:177] * Deleting "default-k8s-diff-port-646000" in qemu2 ...
	W0831 16:05:02.263309    7085 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:02.263331    7085 start.go:729] Will try again in 5 seconds ...
	I0831 16:05:07.265343    7085 start.go:360] acquireMachinesLock for default-k8s-diff-port-646000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:07.265708    7085 start.go:364] duration metric: took 282.75µs to acquireMachinesLock for "default-k8s-diff-port-646000"
	I0831 16:05:07.265814    7085 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:05:07.266083    7085 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:05:07.277669    7085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:05:07.326810    7085 start.go:159] libmachine.API.Create for "default-k8s-diff-port-646000" (driver="qemu2")
	I0831 16:05:07.326849    7085 client.go:168] LocalClient.Create starting
	I0831 16:05:07.326962    7085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:05:07.327028    7085 main.go:141] libmachine: Decoding PEM data...
	I0831 16:05:07.327049    7085 main.go:141] libmachine: Parsing certificate...
	I0831 16:05:07.327106    7085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:05:07.327150    7085 main.go:141] libmachine: Decoding PEM data...
	I0831 16:05:07.327170    7085 main.go:141] libmachine: Parsing certificate...
	I0831 16:05:07.327659    7085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:05:07.494236    7085 main.go:141] libmachine: Creating SSH key...
	I0831 16:05:07.662368    7085 main.go:141] libmachine: Creating Disk image...
	I0831 16:05:07.662377    7085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:05:07.662588    7085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:07.671953    7085 main.go:141] libmachine: STDOUT: 
	I0831 16:05:07.671970    7085 main.go:141] libmachine: STDERR: 
	I0831 16:05:07.672027    7085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2 +20000M
	I0831 16:05:07.679814    7085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:05:07.679836    7085 main.go:141] libmachine: STDERR: 
	I0831 16:05:07.679847    7085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:07.679852    7085 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:05:07.679862    7085 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:07.679889    7085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:69:84:e0:09:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:07.681409    7085 main.go:141] libmachine: STDOUT: 
	I0831 16:05:07.681422    7085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:07.681435    7085 client.go:171] duration metric: took 354.591709ms to LocalClient.Create
	I0831 16:05:09.683538    7085 start.go:128] duration metric: took 2.417502166s to createHost
	I0831 16:05:09.683589    7085 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 2.417928166s
	W0831 16:05:09.683965    7085 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:09.705632    7085 out.go:201] 
	W0831 16:05:09.713968    7085 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:05:09.714005    7085 out.go:270] * 
	* 
	W0831 16:05:09.716143    7085 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:05:09.726622    7085 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (49.79075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
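Triage note: every qemu2 VM creation in this group dies on the same STDERR line, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket_vmnet unix socket on the CI host; the automatic retry five seconds later fails identically, so minikube exits with GUEST_PROVISION. A quick host-side check, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests (the daemon binary path and the --vmnet-gateway value below are assumptions to confirm on the host):

	# is the daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if not, start it manually (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &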

TestStartStop/group/newest-cni/serial/FirstStart (10.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-648000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-648000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.627920125s)

-- stdout --
	* [newest-cni-648000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-648000" primary control-plane node in "newest-cni-648000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-648000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:05:01.572493    7098 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:05:01.572615    7098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:01.572619    7098 out.go:358] Setting ErrFile to fd 2...
	I0831 16:05:01.572621    7098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:01.572765    7098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:05:01.573884    7098 out.go:352] Setting JSON to false
	I0831 16:05:01.589999    7098 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3865,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:05:01.590069    7098 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:05:01.594924    7098 out.go:177] * [newest-cni-648000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:05:01.603903    7098 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:05:01.603936    7098 notify.go:220] Checking for updates...
	I0831 16:05:01.611811    7098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:05:01.614854    7098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:05:01.618703    7098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:05:01.621833    7098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:05:01.624837    7098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:05:01.628235    7098 config.go:182] Loaded profile config "default-k8s-diff-port-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:05:01.628306    7098 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:05:01.628364    7098 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:05:01.631825    7098 out.go:177] * Using the qemu2 driver based on user configuration
	I0831 16:05:01.638868    7098 start.go:297] selected driver: qemu2
	I0831 16:05:01.638876    7098 start.go:901] validating driver "qemu2" against <nil>
	I0831 16:05:01.638882    7098 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:05:01.641326    7098 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0831 16:05:01.641350    7098 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0831 16:05:01.648881    7098 out.go:177] * Automatically selected the socket_vmnet network
	I0831 16:05:01.651943    7098 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0831 16:05:01.651989    7098 cni.go:84] Creating CNI manager for ""
	I0831 16:05:01.652002    7098 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:05:01.652007    7098 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 16:05:01.652047    7098 start.go:340] cluster config:
	{Name:newest-cni-648000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:05:01.655860    7098 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:05:01.664881    7098 out.go:177] * Starting "newest-cni-648000" primary control-plane node in "newest-cni-648000" cluster
	I0831 16:05:01.668806    7098 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:05:01.668823    7098 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:05:01.668837    7098 cache.go:56] Caching tarball of preloaded images
	I0831 16:05:01.668908    7098 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:05:01.668915    7098 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:05:01.668988    7098 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/newest-cni-648000/config.json ...
	I0831 16:05:01.669005    7098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/newest-cni-648000/config.json: {Name:mkbea495a1dfa045ea13e24eb287041dd0677e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 16:05:01.669245    7098 start.go:360] acquireMachinesLock for newest-cni-648000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:02.216351    7098 start.go:364] duration metric: took 547.070625ms to acquireMachinesLock for "newest-cni-648000"
	I0831 16:05:02.216489    7098 start.go:93] Provisioning new machine with config: &{Name:newest-cni-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:05:02.216744    7098 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:05:02.225837    7098 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:05:02.276896    7098 start.go:159] libmachine.API.Create for "newest-cni-648000" (driver="qemu2")
	I0831 16:05:02.276942    7098 client.go:168] LocalClient.Create starting
	I0831 16:05:02.277062    7098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:05:02.277117    7098 main.go:141] libmachine: Decoding PEM data...
	I0831 16:05:02.277136    7098 main.go:141] libmachine: Parsing certificate...
	I0831 16:05:02.277210    7098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:05:02.277255    7098 main.go:141] libmachine: Decoding PEM data...
	I0831 16:05:02.277270    7098 main.go:141] libmachine: Parsing certificate...
	I0831 16:05:02.277897    7098 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:05:02.453081    7098 main.go:141] libmachine: Creating SSH key...
	I0831 16:05:02.533130    7098 main.go:141] libmachine: Creating Disk image...
	I0831 16:05:02.533135    7098 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:05:02.533325    7098 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:02.542445    7098 main.go:141] libmachine: STDOUT: 
	I0831 16:05:02.542465    7098 main.go:141] libmachine: STDERR: 
	I0831 16:05:02.542514    7098 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2 +20000M
	I0831 16:05:02.550386    7098 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:05:02.550404    7098 main.go:141] libmachine: STDERR: 
	I0831 16:05:02.550418    7098 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:02.550422    7098 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:05:02.550433    7098 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:02.550463    7098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:68:42:2e:08:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:02.552095    7098 main.go:141] libmachine: STDOUT: 
	I0831 16:05:02.552113    7098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:02.552130    7098 client.go:171] duration metric: took 275.190542ms to LocalClient.Create
	I0831 16:05:04.554249    7098 start.go:128] duration metric: took 2.337543417s to createHost
	I0831 16:05:04.554341    7098 start.go:83] releasing machines lock for "newest-cni-648000", held for 2.3380205s
	W0831 16:05:04.554411    7098 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:04.570736    7098 out.go:177] * Deleting "newest-cni-648000" in qemu2 ...
	W0831 16:05:04.614583    7098 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:04.614618    7098 start.go:729] Will try again in 5 seconds ...
	I0831 16:05:09.615376    7098 start.go:360] acquireMachinesLock for newest-cni-648000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:09.683726    7098 start.go:364] duration metric: took 68.212084ms to acquireMachinesLock for "newest-cni-648000"
	I0831 16:05:09.683967    7098 start.go:93] Provisioning new machine with config: &{Name:newest-cni-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 16:05:09.684202    7098 start.go:125] createHost starting for "" (driver="qemu2")
	I0831 16:05:09.701525    7098 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0831 16:05:09.753723    7098 start.go:159] libmachine.API.Create for "newest-cni-648000" (driver="qemu2")
	I0831 16:05:09.753772    7098 client.go:168] LocalClient.Create starting
	I0831 16:05:09.753863    7098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/ca.pem
	I0831 16:05:09.753914    7098 main.go:141] libmachine: Decoding PEM data...
	I0831 16:05:09.753931    7098 main.go:141] libmachine: Parsing certificate...
	I0831 16:05:09.753989    7098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18943-972/.minikube/certs/cert.pem
	I0831 16:05:09.754020    7098 main.go:141] libmachine: Decoding PEM data...
	I0831 16:05:09.754032    7098 main.go:141] libmachine: Parsing certificate...
	I0831 16:05:09.754486    7098 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18943-972/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso...
	I0831 16:05:10.005110    7098 main.go:141] libmachine: Creating SSH key...
	I0831 16:05:10.100791    7098 main.go:141] libmachine: Creating Disk image...
	I0831 16:05:10.100802    7098 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0831 16:05:10.100989    7098 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2.raw /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:10.110086    7098 main.go:141] libmachine: STDOUT: 
	I0831 16:05:10.110104    7098 main.go:141] libmachine: STDERR: 
	I0831 16:05:10.110157    7098 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2 +20000M
	I0831 16:05:10.118058    7098 main.go:141] libmachine: STDOUT: Image resized.
	
	I0831 16:05:10.118072    7098 main.go:141] libmachine: STDERR: 
	I0831 16:05:10.118086    7098 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:10.118091    7098 main.go:141] libmachine: Starting QEMU VM...
	I0831 16:05:10.118106    7098 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:10.118133    7098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a5:79:9f:33:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:10.119803    7098 main.go:141] libmachine: STDOUT: 
	I0831 16:05:10.119821    7098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:10.119833    7098 client.go:171] duration metric: took 366.0675ms to LocalClient.Create
	I0831 16:05:12.121896    7098 start.go:128] duration metric: took 2.437709167s to createHost
	I0831 16:05:12.121918    7098 start.go:83] releasing machines lock for "newest-cni-648000", held for 2.438225292s
	W0831 16:05:12.122053    7098 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:12.126368    7098 out.go:201] 
	W0831 16:05:12.138158    7098 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:05:12.138173    7098 out.go:270] * 
	* 
	W0831 16:05:12.139772    7098 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:05:12.154289    7098 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-648000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000: exit status 7 (57.246125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "newest-cni-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.69s)
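Same root cause as above: the --network-plugin=cni and --feature-gates ServerSideApply=true flags are irrelevant here (the CNI warning is advisory only); the run never gets past connecting to /var/run/socket_vmnet. The connection can be exercised without QEMU at all, since socket_vmnet_client connects to the socket and execs the given command with the connection on fd 3 (the same fd=3 the -netdev socket option in the log refers to). A minimal repro sketch, assuming the install paths shown in the log:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# prints "ok" when the daemon is reachable; with the daemon down it fails with the
	# same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' as above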

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-646000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-646000 create -f testdata/busybox.yaml: exit status 1 (30.716334ms)

** stderr ** 
	error: context "default-k8s-diff-port-646000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-646000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (32.973583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (30.756084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
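DeployApp and the serial steps after it are downstream casualties of the failed FirstStart: the VM never booted, so minikube never wrote a default-k8s-diff-port-646000 context into the kubeconfig, and every kubectl --context invocation exits 1 with "context ... does not exist". One way to confirm the missing context against the kubeconfig this run uses (path taken from the environment block earlier in the log):

	KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig kubectl config get-contexts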

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-646000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-646000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-646000 describe deploy/metrics-server -n kube-system: exit status 1 (28.54225ms)

** stderr ** 
	error: context "default-k8s-diff-port-646000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-646000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (31.928166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.313375875s)

-- stdout --
	* [default-k8s-diff-port-646000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-646000" primary control-plane node in "default-k8s-diff-port-646000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:05:11.925775    7144 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:05:11.925941    7144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:11.925944    7144 out.go:358] Setting ErrFile to fd 2...
	I0831 16:05:11.925946    7144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:11.926079    7144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:05:11.927068    7144 out.go:352] Setting JSON to false
	I0831 16:05:11.943161    7144 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3875,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:05:11.943230    7144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:05:11.947655    7144 out.go:177] * [default-k8s-diff-port-646000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:05:11.954543    7144 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:05:11.954592    7144 notify.go:220] Checking for updates...
	I0831 16:05:11.960477    7144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:05:11.963507    7144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:05:11.966483    7144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:05:11.969415    7144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:05:11.972932    7144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:05:11.975750    7144 config.go:182] Loaded profile config "default-k8s-diff-port-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:05:11.976001    7144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:05:11.979455    7144 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 16:05:11.986452    7144 start.go:297] selected driver: qemu2
	I0831 16:05:11.986464    7144 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:05:11.986523    7144 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:05:11.988839    7144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 16:05:11.988882    7144 cni.go:84] Creating CNI manager for ""
	I0831 16:05:11.988889    7144 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:05:11.988917    7144 start.go:340] cluster config:
	{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:05:11.992379    7144 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:05:11.999528    7144 out.go:177] * Starting "default-k8s-diff-port-646000" primary control-plane node in "default-k8s-diff-port-646000" cluster
	I0831 16:05:12.003412    7144 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:05:12.003428    7144 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:05:12.003443    7144 cache.go:56] Caching tarball of preloaded images
	I0831 16:05:12.003509    7144 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:05:12.003514    7144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:05:12.003576    7144 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/default-k8s-diff-port-646000/config.json ...
	I0831 16:05:12.004042    7144 start.go:360] acquireMachinesLock for default-k8s-diff-port-646000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:12.121974    7144 start.go:364] duration metric: took 117.923375ms to acquireMachinesLock for "default-k8s-diff-port-646000"
	I0831 16:05:12.122016    7144 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:05:12.122044    7144 fix.go:54] fixHost starting: 
	I0831 16:05:12.122364    7144 fix.go:112] recreateIfNeeded on default-k8s-diff-port-646000: state=Stopped err=<nil>
	W0831 16:05:12.122387    7144 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:05:12.133277    7144 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	I0831 16:05:12.142276    7144 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:12.142378    7144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:69:84:e0:09:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:12.148104    7144 main.go:141] libmachine: STDOUT: 
	I0831 16:05:12.148156    7144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:12.148243    7144 fix.go:56] duration metric: took 26.1965ms for fixHost
	I0831 16:05:12.148256    7144 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 26.2575ms
	W0831 16:05:12.148274    7144 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:05:12.148369    7144 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:12.148383    7144 start.go:729] Will try again in 5 seconds ...
	I0831 16:05:17.150553    7144 start.go:360] acquireMachinesLock for default-k8s-diff-port-646000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:17.151045    7144 start.go:364] duration metric: took 348.917µs to acquireMachinesLock for "default-k8s-diff-port-646000"
	I0831 16:05:17.151194    7144 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:05:17.151214    7144 fix.go:54] fixHost starting: 
	I0831 16:05:17.151964    7144 fix.go:112] recreateIfNeeded on default-k8s-diff-port-646000: state=Stopped err=<nil>
	W0831 16:05:17.151996    7144 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:05:17.161539    7144 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	I0831 16:05:17.164464    7144 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:17.164687    7144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:69:84:e0:09:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0831 16:05:17.173341    7144 main.go:141] libmachine: STDOUT: 
	I0831 16:05:17.173419    7144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:17.173513    7144 fix.go:56] duration metric: took 22.2975ms for fixHost
	I0831 16:05:17.173541    7144 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 22.45825ms
	W0831 16:05:17.173704    7144 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:17.182610    7144 out.go:201] 
	W0831 16:05:17.186773    7144 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:05:17.186798    7144 out.go:270] * 
	* 
	W0831 16:05:17.189155    7144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:05:17.196575    7144 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (66.316666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.38s)
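
Every qemu2 failure in this report shows the same root cause: nothing is accepting connections on /var/run/socket_vmnet when libmachine launches QEMU through socket_vmnet_client. A minimal, self-contained Go sketch (not part of the test suite; the socket path is copied from the log above) that reproduces the same check on the CI host:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same path the driver passes to socket_vmnet_client in the log above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon accepting on the socket, this prints
		// a "connection refused" error, matching the STDERR libmachine
		// captured in this test.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails, every start in this run that uses Network:socket_vmnet can be expected to fail the same way, which matches the repeated GUEST_PROVISION exits below.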

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-648000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-648000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.182035167s)

-- stdout --
	* [newest-cni-648000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-648000" primary control-plane node in "newest-cni-648000" cluster
	* Restarting existing qemu2 VM for "newest-cni-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0831 16:05:15.736323    7177 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:05:15.736436    7177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:15.736440    7177 out.go:358] Setting ErrFile to fd 2...
	I0831 16:05:15.736442    7177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:15.736567    7177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:05:15.737572    7177 out.go:352] Setting JSON to false
	I0831 16:05:15.753644    7177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3879,"bootTime":1725141636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 16:05:15.753713    7177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 16:05:15.757225    7177 out.go:177] * [newest-cni-648000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 16:05:15.763223    7177 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 16:05:15.763256    7177 notify.go:220] Checking for updates...
	I0831 16:05:15.771170    7177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 16:05:15.774139    7177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 16:05:15.777214    7177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 16:05:15.780251    7177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 16:05:15.783214    7177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 16:05:15.786462    7177 config.go:182] Loaded profile config "newest-cni-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:05:15.786718    7177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 16:05:15.790225    7177 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 16:05:15.797142    7177 start.go:297] selected driver: qemu2
	I0831 16:05:15.797148    7177 start.go:901] validating driver "qemu2" against &{Name:newest-cni-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:05:15.797196    7177 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 16:05:15.799395    7177 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0831 16:05:15.799436    7177 cni.go:84] Creating CNI manager for ""
	I0831 16:05:15.799443    7177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 16:05:15.799466    7177 start.go:340] cluster config:
	{Name:newest-cni-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 16:05:15.802949    7177 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 16:05:15.811159    7177 out.go:177] * Starting "newest-cni-648000" primary control-plane node in "newest-cni-648000" cluster
	I0831 16:05:15.815235    7177 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 16:05:15.815247    7177 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 16:05:15.815257    7177 cache.go:56] Caching tarball of preloaded images
	I0831 16:05:15.815300    7177 preload.go:172] Found /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 16:05:15.815305    7177 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 16:05:15.815358    7177 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/newest-cni-648000/config.json ...
	I0831 16:05:15.815824    7177 start.go:360] acquireMachinesLock for newest-cni-648000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:15.815851    7177 start.go:364] duration metric: took 20.791µs to acquireMachinesLock for "newest-cni-648000"
	I0831 16:05:15.815861    7177 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:05:15.815866    7177 fix.go:54] fixHost starting: 
	I0831 16:05:15.815981    7177 fix.go:112] recreateIfNeeded on newest-cni-648000: state=Stopped err=<nil>
	W0831 16:05:15.815989    7177 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:05:15.820182    7177 out.go:177] * Restarting existing qemu2 VM for "newest-cni-648000" ...
	I0831 16:05:15.827181    7177 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:15.827214    7177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a5:79:9f:33:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:15.829182    7177 main.go:141] libmachine: STDOUT: 
	I0831 16:05:15.829203    7177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:15.829234    7177 fix.go:56] duration metric: took 13.368708ms for fixHost
	I0831 16:05:15.829240    7177 start.go:83] releasing machines lock for "newest-cni-648000", held for 13.385584ms
	W0831 16:05:15.829246    7177 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:05:15.829286    7177 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:15.829290    7177 start.go:729] Will try again in 5 seconds ...
	I0831 16:05:20.831406    7177 start.go:360] acquireMachinesLock for newest-cni-648000: {Name:mk7d928f36022e2d5b6e6a0b65f96bf3d1fc002a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 16:05:20.831977    7177 start.go:364] duration metric: took 453.417µs to acquireMachinesLock for "newest-cni-648000"
	I0831 16:05:20.832128    7177 start.go:96] Skipping create...Using existing machine configuration
	I0831 16:05:20.832149    7177 fix.go:54] fixHost starting: 
	I0831 16:05:20.832878    7177 fix.go:112] recreateIfNeeded on newest-cni-648000: state=Stopped err=<nil>
	W0831 16:05:20.832902    7177 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 16:05:20.836365    7177 out.go:177] * Restarting existing qemu2 VM for "newest-cni-648000" ...
	I0831 16:05:20.846168    7177 qemu.go:418] Using hvf for hardware acceleration
	I0831 16:05:20.846530    7177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a5:79:9f:33:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18943-972/.minikube/machines/newest-cni-648000/disk.qcow2
	I0831 16:05:20.856707    7177 main.go:141] libmachine: STDOUT: 
	I0831 16:05:20.856785    7177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0831 16:05:20.856883    7177 fix.go:56] duration metric: took 24.735708ms for fixHost
	I0831 16:05:20.856903    7177 start.go:83] releasing machines lock for "newest-cni-648000", held for 24.899875ms
	W0831 16:05:20.857199    7177 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0831 16:05:20.865354    7177 out.go:201] 
	W0831 16:05:20.867002    7177 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0831 16:05:20.867043    7177 out.go:270] * 
	* 
	W0831 16:05:20.869415    7177 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 16:05:20.877267    7177 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-648000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000: exit status 7 (70.481667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "newest-cni-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
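
The stderr above makes the start flow's retry shape visible: fixHost fails, the driver logs "Will try again in 5 seconds ...", retries once, and then exits with GUEST_PROVISION. A simplified Go sketch of that control flow; startHost and the error value are illustrative stand-ins, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// errConnRefused mirrors the driver error captured in the log above.
var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// startHost stands in for the driver start that fails in this test.
func startHost() error { return errConnRefused }

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}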

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-646000" does not exist
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (32.412ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
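
The "context ... does not exist" failure is a kubeconfig lookup: the failed SecondStart never re-created the cluster's context, so every sub-test that needs a client aborts immediately. A sketch of the underlying check, assuming k8s.io/client-go is available; the kubeconfig path and profile name are copied from the log above:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG value printed earlier in this run.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18943-972/kubeconfig")
	if err != nil {
		fmt.Println("cannot read kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["default-k8s-diff-port-646000"]; !ok {
		// This is the state the test found: the start failed before the
		// context was written, so the lookup misses.
		fmt.Println(`context "default-k8s-diff-port-646000" does not exist`)
	}
}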

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-646000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.496875ms)

** stderr ** 
	error: context "default-k8s-diff-port-646000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (29.562959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-646000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (29.365792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
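
The "(-want +got)" block above is a want/got diff: the host never started, so image list returned nothing and every expected v1.31.0 image is reported missing. A stdlib-only Go sketch of the same set-difference check (the real test appears to use a diff library, judging by the output format):

package main

import "fmt"

// missing returns the entries of want that do not appear in got.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var out []string
	for _, img := range want {
		if !have[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // what `image list` yields from a stopped host
	for _, img := range missing(want, got) {
		fmt.Println("-", img)
	}
}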

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-646000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-646000 --alsologtostderr -v=1: exit status 83 (41.083458ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-646000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-646000"

-- /stdout --
** stderr ** 
	I0831 16:05:17.463874    7196 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:05:17.464015    7196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:17.464018    7196 out.go:358] Setting ErrFile to fd 2...
	I0831 16:05:17.464020    7196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:17.464151    7196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:05:17.464367    7196 out.go:352] Setting JSON to false
	I0831 16:05:17.464372    7196 mustload.go:65] Loading cluster: default-k8s-diff-port-646000
	I0831 16:05:17.464585    7196 config.go:182] Loaded profile config "default-k8s-diff-port-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:05:17.469198    7196 out.go:177] * The control-plane node default-k8s-diff-port-646000 host is not running: state=Stopped
	I0831 16:05:17.473099    7196 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-646000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-646000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (29.7305ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (28.481458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
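
The post-mortem helper's status --format={{.Host}} flag is a Go text/template rendered against the status object, which is why it prints the bare word Stopped. A minimal sketch of that rendering; the Status struct here is an illustrative stand-in for minikube's internal type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct the --format template is applied to.
type Status struct {
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// "Stopped" matches what every post-mortem in this group prints.
	tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
}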

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-648000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000: exit status 7 (29.4945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "newest-cni-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-648000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-648000 --alsologtostderr -v=1: exit status 83 (42.217083ms)

-- stdout --
	* The control-plane node newest-cni-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-648000"

-- /stdout --
** stderr ** 
	I0831 16:05:21.063099    7220 out.go:345] Setting OutFile to fd 1 ...
	I0831 16:05:21.063264    7220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:21.063267    7220 out.go:358] Setting ErrFile to fd 2...
	I0831 16:05:21.063269    7220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 16:05:21.063379    7220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 16:05:21.063596    7220 out.go:352] Setting JSON to false
	I0831 16:05:21.063606    7220 mustload.go:65] Loading cluster: newest-cni-648000
	I0831 16:05:21.063796    7220 config.go:182] Loaded profile config "newest-cni-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 16:05:21.068556    7220 out.go:177] * The control-plane node newest-cni-648000 host is not running: state=Stopped
	I0831 16:05:21.072456    7220 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-648000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-648000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000: exit status 7 (30.205041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "newest-cni-648000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000: exit status 7 (30.314834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "newest-cni-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (155/275)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 7.13
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 138.15
29 TestAddons/serial/Volcano 38.33
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 18.52
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 5.29
39 TestAddons/parallel/CSI 47.58
40 TestAddons/parallel/Headlamp 16.65
41 TestAddons/parallel/CloudSpanner 5.21
42 TestAddons/parallel/LocalPath 53.07
43 TestAddons/parallel/NvidiaDevicePlugin 6.18
44 TestAddons/parallel/Yakd 10.27
45 TestAddons/StoppedEnableDisable 9.4
53 TestHyperKitDriverInstallOrUpdate 10.76
56 TestErrorSpam/setup 35.33
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.68
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 55.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 42.27
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.23
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.74
73 TestFunctional/serial/CacheCmd/cache/add_local 1.17
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.74
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1
81 TestFunctional/serial/ExtraConfig 32.94
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.65
84 TestFunctional/serial/LogsFileCmd 0.66
85 TestFunctional/serial/InvalidService 4.42
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 6.23
89 TestFunctional/parallel/DryRun 0.24
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.25
96 TestFunctional/parallel/AddonsCmd 0.09
97 TestFunctional/parallel/PersistentVolumeClaim 25.6
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.44
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
111 TestFunctional/parallel/License 0.29
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.17
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.71
119 TestFunctional/parallel/ImageCommands/Setup 1.69
120 TestFunctional/parallel/DockerEnv/bash 0.29
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.39
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.35
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
137 TestFunctional/parallel/ServiceCmd/List 0.12
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.63
152 TestFunctional/parallel/MountCmd/specific-port 0.97
153 TestFunctional/parallel/MountCmd/VerifyCleanup 0.97
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 186.61
161 TestMultiControlPlane/serial/DeployApp 4.34
162 TestMultiControlPlane/serial/PingHostFromPods 0.7
163 TestMultiControlPlane/serial/AddWorkerNode 53.75
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.21
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.9
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.38
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
260 TestStoppedBinaryUpgrade/Setup 0.91
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
277 TestNoKubernetes/serial/ProfileList 31.45
278 TestNoKubernetes/serial/Stop 2.93
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStartStop/group/old-k8s-version/serial/Stop 2.07
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
303 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
306 TestStartStop/group/no-preload/serial/Stop 3.3
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/embed-certs/serial/Stop 3.06
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.76
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
335 TestStartStop/group/newest-cni/serial/Stop 3.3
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-445000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-445000: exit status 85 (92.66425ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-445000 | jenkins | v1.33.1 | 31 Aug 24 15:04 PDT |          |
	|         | -p download-only-445000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 15:04:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 15:04:53.212301    1434 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:04:53.212436    1434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:04:53.212440    1434 out.go:358] Setting ErrFile to fd 2...
	I0831 15:04:53.212443    1434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:04:53.212563    1434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	W0831 15:04:53.212642    1434 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18943-972/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18943-972/.minikube/config/config.json: no such file or directory
	I0831 15:04:53.213857    1434 out.go:352] Setting JSON to true
	I0831 15:04:53.231229    1434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":257,"bootTime":1725141636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:04:53.231291    1434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:04:53.235858    1434 out.go:97] [download-only-445000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:04:53.236036    1434 notify.go:220] Checking for updates...
	W0831 15:04:53.236024    1434 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 15:04:53.239726    1434 out.go:169] MINIKUBE_LOCATION=18943
	I0831 15:04:53.242766    1434 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:04:53.247752    1434 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:04:53.250803    1434 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:04:53.253781    1434 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	W0831 15:04:53.259817    1434 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 15:04:53.260038    1434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:04:53.264799    1434 out.go:97] Using the qemu2 driver based on user configuration
	I0831 15:04:53.264819    1434 start.go:297] selected driver: qemu2
	I0831 15:04:53.264823    1434 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:04:53.264919    1434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:04:53.268786    1434 out.go:169] Automatically selected the socket_vmnet network
	I0831 15:04:53.274571    1434 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0831 15:04:53.274647    1434 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 15:04:53.274726    1434 cni.go:84] Creating CNI manager for ""
	I0831 15:04:53.274743    1434 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 15:04:53.274784    1434 start.go:340] cluster config:
	{Name:download-only-445000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-445000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:04:53.279852    1434 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:04:53.284805    1434 out.go:97] Downloading VM boot image ...
	I0831 15:04:53.284823    1434 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/iso/arm64/minikube-v1.33.1-1724862017-19530-arm64.iso
	I0831 15:04:58.394705    1434 out.go:97] Starting "download-only-445000" primary control-plane node in "download-only-445000" cluster
	I0831 15:04:58.394723    1434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:04:58.456694    1434 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 15:04:58.456730    1434 cache.go:56] Caching tarball of preloaded images
	I0831 15:04:58.456908    1434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:04:58.460948    1434 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 15:04:58.460955    1434 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:04:58.539868    1434 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 15:05:04.741593    1434 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:05:04.741766    1434 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:05:05.436531    1434 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 15:05:05.436712    1434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/download-only-445000/config.json ...
	I0831 15:05:05.436728    1434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/download-only-445000/config.json: {Name:mk7632ce36e65a62ce12d632696257046a80f598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 15:05:05.436953    1434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 15:05:05.437143    1434 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0831 15:05:05.966382    1434 out.go:193] 
	W0831 15:05:05.974431    1434 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18943-972/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920 0x104be7920] Decompressors:map[bz2:0x14000883af0 gz:0x14000883af8 tar:0x14000883aa0 tar.bz2:0x14000883ab0 tar.gz:0x14000883ac0 tar.xz:0x14000883ad0 tar.zst:0x14000883ae0 tbz2:0x14000883ab0 tgz:0x14000883ac0 txz:0x14000883ad0 tzst:0x14000883ae0 xz:0x14000883b00 zip:0x14000883b10 zst:0x14000883b08] Getters:map[file:0x140009d8df0 http:0x140007565f0 https:0x14000756640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0831 15:05:05.974454    1434 out_reason.go:110] 
	W0831 15:05:05.983133    1434 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0831 15:05:05.987263    1434 out.go:193] 
	
	
	* The control-plane node download-only-445000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-445000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
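
Note on the expected failure above: a download-only profile never starts a host, so "minikube logs" exits with status 85, and the test passes precisely because it asserts that exit code. A minimal sketch of asserting a specific exit code with Go's os/exec; runExpectingExit is a hypothetical helper for illustration, not part of the minikube harness:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // runExpectingExit runs a command and succeeds only if it exits with want.
    func runExpectingExit(want int, name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        var ee *exec.ExitError
        switch {
        case errors.As(err, &ee) && ee.ExitCode() == want:
            return nil // the expected failure mode
        case err != nil:
            return fmt.Errorf("unexpected error: %v\noutput:\n%s", err, out)
        default:
            return fmt.Errorf("command succeeded, want exit %d", want)
        }
    }

    func main() {
        err := runExpectingExit(85, "out/minikube-darwin-arm64",
            "logs", "-p", "download-only-445000")
        fmt.Println(err) // <nil> when the command failed in the expected way
    }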

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-445000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (7.13s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-326000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-326000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (7.134544541s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.13s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-326000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-326000: exit status 85 (74.3865ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-445000 | jenkins | v1.33.1 | 31 Aug 24 15:04 PDT |                     |
	|         | -p download-only-445000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| delete  | -p download-only-445000        | download-only-445000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT | 31 Aug 24 15:05 PDT |
	| start   | -o=json --download-only        | download-only-326000 | jenkins | v1.33.1 | 31 Aug 24 15:05 PDT |                     |
	|         | -p download-only-326000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 15:05:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 15:05:06.385018    1460 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:05:06.385127    1460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:05:06.385130    1460 out.go:358] Setting ErrFile to fd 2...
	I0831 15:05:06.385133    1460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:05:06.385256    1460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:05:06.386478    1460 out.go:352] Setting JSON to true
	I0831 15:05:06.402604    1460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":270,"bootTime":1725141636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:05:06.402676    1460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:05:06.406714    1460 out.go:97] [download-only-326000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:05:06.406805    1460 notify.go:220] Checking for updates...
	I0831 15:05:06.410534    1460 out.go:169] MINIKUBE_LOCATION=18943
	I0831 15:05:06.413634    1460 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:05:06.416463    1460 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:05:06.419553    1460 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:05:06.422644    1460 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	W0831 15:05:06.428565    1460 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 15:05:06.428718    1460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:05:06.431561    1460 out.go:97] Using the qemu2 driver based on user configuration
	I0831 15:05:06.431568    1460 start.go:297] selected driver: qemu2
	I0831 15:05:06.431572    1460 start.go:901] validating driver "qemu2" against <nil>
	I0831 15:05:06.431614    1460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 15:05:06.432761    1460 out.go:169] Automatically selected the socket_vmnet network
	I0831 15:05:06.437593    1460 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0831 15:05:06.437693    1460 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 15:05:06.437727    1460 cni.go:84] Creating CNI manager for ""
	I0831 15:05:06.437734    1460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 15:05:06.437740    1460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 15:05:06.437786    1460 start.go:340] cluster config:
	{Name:download-only-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:05:06.441144    1460 iso.go:125] acquiring lock: {Name:mk8ceb0a45bdb16b7cc1f947d1ec201f47666698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 15:05:06.444623    1460 out.go:97] Starting "download-only-326000" primary control-plane node in "download-only-326000" cluster
	I0831 15:05:06.444632    1460 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:05:06.520643    1460 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 15:05:06.520668    1460 cache.go:56] Caching tarball of preloaded images
	I0831 15:05:06.520868    1460 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 15:05:06.526071    1460 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0831 15:05:06.526079    1460 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 15:05:06.606776    1460 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/18943-972/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-326000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-326000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
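
The download lines in these logs follow go-getter's convention of appending the checksum to the source URL ("?checksum=md5:..." or "?checksum=file:<url>"); the kubectl failure earlier in this report was a 404 on exactly such a checksum file. A minimal fetch sketch with hashicorp/go-getter, reusing the preload URL and md5 shown in this log; the destination filename is arbitrary:

    package main

    import (
        "log"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        // go-getter downloads the file and verifies it against the checksum
        // named in the query string; an unreachable checksum file fails the
        // fetch with "invalid checksum: ... bad response code: 404".
        src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/" +
            "v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4" +
            "?checksum=md5:90c22abece392b762c0b4e45be981bb4"
        if err := getter.GetFile("preloaded.tar.lz4", src); err != nil {
            log.Fatal(err)
        }
    }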

TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-326000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-803000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:176: Cleaning up "binary-mirror-803000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-803000
--- PASS: TestBinaryMirror (0.30s)
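
TestBinaryMirror points minikube's kubectl/kubelet/kubeadm downloads at a local HTTP endpoint via --binary-mirror (http://127.0.0.1:49311 in this run). A minimal sketch of such a mirror, assuming a local ./mirror directory laid out like the upstream release paths (that layout is an assumption, not something this log confirms):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve pre-fetched Kubernetes release binaries from ./mirror so a
        // start with --binary-mirror http://127.0.0.1:49311 never has to
        // reach dl.k8s.io.
        log.Fatal(http.ListenAndServe("127.0.0.1:49311",
            http.FileServer(http.Dir("./mirror"))))
    }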

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-492000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-492000: exit status 85 (61.0115ms)
-- stdout --
	* Profile "addons-492000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-492000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-492000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-492000: exit status 85 (57.479333ms)
-- stdout --
	* Profile "addons-492000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-492000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (138.15s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-492000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-492000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m18.145835292s)
--- PASS: TestAddons/Setup (138.15s)

TestAddons/serial/Volcano (38.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 6.986625ms
addons_test.go:905: volcano-admission stabilized in 7.3465ms
addons_test.go:913: volcano-controller stabilized in 7.560791ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-scheduler-576bc46687-7pl84" [df5e2e8a-c1f0-4a06-839a-e8bc164c6895] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0055855s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-admission-77d7d48b68-jfvfl" [5ffa38af-1c59-4bc1-bc1c-3f66c1578172] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.005106583s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-controllers-56675bb4d5-nbssr" [64776216-81bc-446a-b2f4-13c563593d94] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003895584s
addons_test.go:932: (dbg) Run:  kubectl --context addons-492000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-492000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-492000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:345: "test-job-nginx-0" [a9b0c792-b223-480e-8d3d-bfdd38e34f1f] Pending
helpers_test.go:345: "test-job-nginx-0" [a9b0c792-b223-480e-8d3d-bfdd38e34f1f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "test-job-nginx-0" [a9b0c792-b223-480e-8d3d-bfdd38e34f1f] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.005128959s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-492000 addons disable volcano --alsologtostderr -v=1: (10.112308792s)
--- PASS: TestAddons/serial/Volcano (38.33s)
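
The "waiting 6m0s for pods matching ..." lines are the harness polling pods by label selector until the addon comes up. A rough client-go equivalent that checks only the Running phase (the real helpers also track readiness), assuming a kubeconfig for the cluster at the default location:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForRunningPod polls until a pod matching selector in ns is Running.
    func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, err // give up on API errors
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return true, nil
                    }
                }
                return false, nil // not ready yet; keep polling
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForRunningPod(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("volcano-scheduler is running")
    }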

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-492000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-492000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (18.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-492000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-492000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-492000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:345: "nginx" [e303b475-85d7-47f1-ba2e-8f3cd8610c93] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx" [e303b475-85d7-47f1-ba2e-8f3cd8610c93] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003409209s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-492000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-492000 addons disable ingress --alsologtostderr -v=1: (7.231169709s)
--- PASS: TestAddons/parallel/Ingress (18.52s)
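
The curl step above exercises name-based routing: the request is made from inside the node (via minikube ssh) to 127.0.0.1, and the Host header selects the nginx Ingress rule. A rough host-side equivalent in Go, assuming the ingress is reachable at the node IP this run reports (192.168.105.2, a placeholder here):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "http://192.168.105.2/", nil)
        if err != nil {
            panic(err)
        }
        // Setting req.Host overrides the Host header, the equivalent of
        // curl -H 'Host: nginx.example.com'.
        req.Host = "nginx.example.com"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d, %d bytes\n", resp.StatusCode, len(body))
    }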

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-n7lws" [6b7e73c3-2273-4a99-8671-c4f995870bdf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010845333s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-492000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-492000: (5.303026292s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.216375ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-p6wqw" [61c9f92c-d1e6-4024-9b33-3c247eca65f1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010721958s
addons_test.go:417: (dbg) Run:  kubectl --context addons-492000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (47.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.981625ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [403e9329-182d-4213-a484-d2caed844c2d] Pending
helpers_test.go:345: "task-pv-pod" [403e9329-182d-4213-a484-d2caed844c2d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [403e9329-182d-4213-a484-d2caed844c2d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.006524916s
addons_test.go:590: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context addons-492000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context addons-492000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-492000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-492000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [a023cbad-7e60-4ab8-ba4f-8ac52d3a60c3] Pending
helpers_test.go:345: "task-pv-pod-restore" [a023cbad-7e60-4ab8-ba4f-8ac52d3a60c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [a023cbad-7e60-4ab8-ba4f-8ac52d3a60c3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.007385208s
addons_test.go:632: (dbg) Run:  kubectl --context addons-492000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-492000 delete pod task-pv-pod-restore: (1.094263292s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-492000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-492000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-492000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.167455083s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.58s)

TestAddons/parallel/Headlamp (16.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-492000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-btb74" [8393151b-7a74-4233-9a70-1f7a8429a748] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-btb74" [8393151b-7a74-4233-9a70-1f7a8429a748] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-btb74" [8393151b-7a74-4233-9a70-1f7a8429a748] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.010374792s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-492000 addons disable headlamp --alsologtostderr -v=1: (5.306471875s)
--- PASS: TestAddons/parallel/Headlamp (16.65s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-gsrh2" [0e413c5a-8dfd-4f10-9b93-5657fd72355a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010261833s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-492000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (53.07s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-492000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-492000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:345: "test-local-path" [7e316686-95da-4af0-ac55-11277952bd8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "test-local-path" [7e316686-95da-4af0-ac55-11277952bd8a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "test-local-path" [7e316686-95da-4af0-ac55-11277952bd8a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004580958s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-492000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 ssh "cat /opt/local-path-provisioner/pvc-acbd79e1-2d58-42b6-85c3-4dd2c8230a6a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-492000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-492000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-492000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.551716375s)
--- PASS: TestAddons/parallel/LocalPath (53.07s)

TestAddons/parallel/NvidiaDevicePlugin (6.18s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-xbb9h" [5401db3a-6b96-4340-9c15-bc04f6727ea6] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006610834s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-492000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.18s)

TestAddons/parallel/Yakd (10.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-wn4h6" [7320b62d-708d-4525-a00d-8432ccb6636a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006859667s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-492000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-492000 addons disable yakd --alsologtostderr -v=1: (5.265195292s)
--- PASS: TestAddons/parallel/Yakd (10.27s)

TestAddons/StoppedEnableDisable (9.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-492000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-492000: (9.208720041s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-492000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-492000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-492000
--- PASS: TestAddons/StoppedEnableDisable (9.40s)
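
For reference, the enable/disable-while-stopped flow recorded above can be replayed by hand. A minimal sketch, assuming the minikube binary is on PATH and the profile addons-492000 from this log exists:

  minikube stop -p addons-492000                      # cluster must be stopped first
  minikube addons enable dashboard -p addons-492000   # addon state is still editable while stopped
  minikube addons disable dashboard -p addons-492000
  minikube addons disable gvisor -p addons-492000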

TestHyperKitDriverInstallOrUpdate (10.76s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.76s)

TestErrorSpam/setup (35.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-194000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-194000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 --driver=qemu2 : (35.328840666s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (35.33s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 stop: (3.193925833s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 stop: (26.041000833s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-194000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-194000 stop: (26.035389917s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/18943-972/.minikube/files/etc/test/nested/copy/1432/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-221000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-221000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (42.264192375s)
--- PASS: TestFunctional/serial/StartWithProxy (42.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.23s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-221000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-221000 --alsologtostderr -v=8: (38.233777041s)
functional_test.go:663: soft start took 38.234183584s for "functional-221000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.23s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-221000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-221000 cache add registry.k8s.io/pause:3.1: (1.088517875s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2207906617/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cache add minikube-local-cache-test:functional-221000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cache delete minikube-local-cache-test:functional-221000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-221000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
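
For reference, the local-cache flow above builds a throwaway image on the host and round-trips it through minikube's cache. A minimal sketch, with a hypothetical image tag and build context standing in for the test's generated ones:

  docker build -t demo-cache-test:functional-221000 ./build-context   # hypothetical tag and context dir
  minikube -p functional-221000 cache add demo-cache-test:functional-221000
  minikube -p functional-221000 cache delete demo-cache-test:functional-221000
  docker rmi demo-cache-test:functional-221000                        # clean up the host-side image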

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.330584ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
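
The reload check above removes a cached image inside the VM, confirms it is gone, then restores it from the host-side cache. A minimal sketch of the same round trip, using the pause image from the log:

  minikube -p functional-221000 ssh sudo docker rmi registry.k8s.io/pause:latest       # delete in the VM
  minikube -p functional-221000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # now fails (exit 1)
  minikube -p functional-221000 cache reload                                           # re-push cached images
  minikube -p functional-221000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again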

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 kubectl -- --context functional-221000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-221000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.00s)

TestFunctional/serial/ExtraConfig (32.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-221000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-221000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.942557958s)
functional_test.go:761: restart took 32.942660708s for "functional-221000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.94s)
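
The restart above is how per-component flags are injected: --extra-config takes component.flag=value pairs and is applied by re-running start against the existing profile. A minimal sketch:

  minikube start -p functional-221000 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all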

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-221000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4087557366/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-221000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-221000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-221000: exit status 115 (128.738292ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32012 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-221000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-221000 delete -f testdata/invalidsvc.yaml: (1.195957792s)
--- PASS: TestFunctional/serial/InvalidService (4.42s)
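
The test above confirms that minikube service fails fast (exit status 115, SVC_UNREACHABLE) when a Service has no running backing pod. A minimal sketch, assuming a manifest like the repo's testdata/invalidsvc.yaml that defines a Service whose selector matches nothing:

  kubectl --context functional-221000 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-221000    # exits 115: no running pod for service
  kubectl --context functional-221000 delete -f testdata/invalidsvc.yaml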

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 config get cpus: exit status 14 (29.823833ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 config get cpus: exit status 14 (33.055417ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
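
As the exit codes above show, config get on an unset key fails with exit status 14, while set/get/unset otherwise round-trip cleanly. A minimal sketch:

  minikube -p functional-221000 config get cpus      # exit 14: key not present
  minikube -p functional-221000 config set cpus 2
  minikube -p functional-221000 config get cpus      # prints 2
  minikube -p functional-221000 config unset cpus    # back to exit 14 on the next get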

TestFunctional/parallel/DashboardCmd (6.23s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-221000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-221000 --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 2434: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.23s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-221000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-221000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.47325ms)
-- stdout --
	* [functional-221000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0831 15:22:52.224699    2393 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:22:52.224842    2393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:22:52.224846    2393 out.go:358] Setting ErrFile to fd 2...
	I0831 15:22:52.224848    2393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:22:52.224965    2393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:22:52.225973    2393 out.go:352] Setting JSON to false
	I0831 15:22:52.243516    2393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1336,"bootTime":1725141636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:22:52.243579    2393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:22:52.248128    2393 out.go:177] * [functional-221000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0831 15:22:52.253943    2393 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:22:52.254000    2393 notify.go:220] Checking for updates...
	I0831 15:22:52.261921    2393 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:22:52.265936    2393 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:22:52.268954    2393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:22:52.271943    2393 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:22:52.274953    2393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:22:52.278246    2393 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:22:52.278520    2393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:22:52.282900    2393 out.go:177] * Using the qemu2 driver based on existing profile
	I0831 15:22:52.289959    2393 start.go:297] selected driver: qemu2
	I0831 15:22:52.289968    2393 start.go:901] validating driver "qemu2" against &{Name:functional-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:22:52.290026    2393 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:22:52.296931    2393 out.go:201] 
	W0831 15:22:52.300946    2393 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 15:22:52.303966    2393 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-221000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
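
The dry run above still performs resource validation: a request below the usable minimum of 1800MB fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work happens. A minimal sketch:

  minikube start -p functional-221000 --dry-run --memory 250MB --driver=qemu2   # exit 23: memory too low
  minikube start -p functional-221000 --dry-run --driver=qemu2                  # passes validation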

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-221000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-221000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (154.133667ms)
-- stdout --
	* [functional-221000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0831 15:22:52.465545    2408 out.go:345] Setting OutFile to fd 1 ...
	I0831 15:22:52.465649    2408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:22:52.465653    2408 out.go:358] Setting ErrFile to fd 2...
	I0831 15:22:52.465656    2408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 15:22:52.465789    2408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
	I0831 15:22:52.467008    2408 out.go:352] Setting JSON to false
	I0831 15:22:52.486052    2408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1336,"bootTime":1725141636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0831 15:22:52.486148    2408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0831 15:22:52.490944    2408 out.go:177] * [functional-221000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0831 15:22:52.499841    2408 notify.go:220] Checking for updates...
	I0831 15:22:52.503970    2408 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 15:22:52.511951    2408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	I0831 15:22:52.519995    2408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0831 15:22:52.529955    2408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 15:22:52.536890    2408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	I0831 15:22:52.546869    2408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 15:22:52.551280    2408 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 15:22:52.551523    2408 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 15:22:52.557993    2408 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0831 15:22:52.569923    2408 start.go:297] selected driver: qemu2
	I0831 15:22:52.569936    2408 start.go:901] validating driver "qemu2" against &{Name:functional-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19530/minikube-v1.33.1-1724862017-19530-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 15:22:52.569999    2408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 15:22:52.576964    2408 out.go:201] 
	W0831 15:22:52.579971    2408 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0831 15:22:52.583794    2408 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
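
status accepts a Go template via -f and structured output via -o, as exercised above. A minimal sketch (the template string, including its "kublet" label, is copied verbatim from the log):

  minikube -p functional-221000 status
  minikube -p functional-221000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-221000 status -o json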

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [ae0ec11c-af2f-4c29-8beb-e7d90e24ed30] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009495458s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-221000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-221000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-221000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-221000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [1a39d87e-9892-4cf4-91d0-f7023dfd0c17] Pending
helpers_test.go:345: "sp-pod" [1a39d87e-9892-4cf4-91d0-f7023dfd0c17] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [1a39d87e-9892-4cf4-91d0-f7023dfd0c17] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006738959s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-221000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-221000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-221000 delete -f testdata/storage-provisioner/pod.yaml: (1.065990208s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-221000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [c31e88bc-5f56-468e-bd84-bf76f5cc4274] Pending
E0831 15:22:37.606239    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "sp-pod" [c31e88bc-5f56-468e-bd84-bf76f5cc4274] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [c31e88bc-5f56-468e-bd84-bf76f5cc4274] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010185834s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-221000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.60s)
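
The sequence above demonstrates that the claim outlives its pod: a file written before the first pod is deleted is still present when a second pod mounts the same PVC. A minimal sketch, assuming PVC and pod manifests like the repo's testdata/storage-provisioner files:

  kubectl --context functional-221000 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-221000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-221000 exec sp-pod -- touch /tmp/mount/foo    # write through the mount
  kubectl --context functional-221000 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-221000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-221000 exec sp-pod -- ls /tmp/mount           # foo survives the pod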

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh -n functional-221000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cp functional-221000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2188834223/001/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh -n functional-221000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh -n functional-221000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)
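
minikube cp copies in both directions, with in-VM paths written as profile:path, covering the three cases exercised above. A minimal sketch:

  minikube -p functional-221000 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
  minikube -p functional-221000 cp functional-221000:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
  minikube -p functional-221000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt      # parent dirs are created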

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1432/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /etc/test/nested/copy/1432/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1432.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /etc/ssl/certs/1432.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1432.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /usr/share/ca-certificates/1432.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14322.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /etc/ssl/certs/14322.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14322.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /usr/share/ca-certificates/14322.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
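
Host certificates are synced to three in-VM locations, as probed above: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash symlink such as /etc/ssl/certs/51391683.0. A minimal sketch of the same probe:

  minikube -p functional-221000 ssh "sudo cat /etc/ssl/certs/1432.pem"
  minikube -p functional-221000 ssh "sudo cat /usr/share/ca-certificates/1432.pem"
  minikube -p functional-221000 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named alias of the same cert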

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-221000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
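
The label check above flattens the label keys of the first node into a single line with a Go template. A minimal sketch:

  kubectl --context functional-221000 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'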

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 ssh "sudo systemctl is-active crio": exit status 1 (122.770833ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
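
With docker as the active runtime, the other runtimes must be inactive: systemctl is-active prints "inactive" and exits 3, which the test treats as a pass. A minimal sketch:

  minikube -p functional-221000 ssh "sudo systemctl is-active crio"   # prints "inactive"; systemctl exits 3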

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-221000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-221000
docker.io/kicbase/echo-server:functional-221000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-221000 image ls --format short --alsologtostderr:
I0831 15:22:53.008616    2428 out.go:345] Setting OutFile to fd 1 ...
I0831 15:22:53.008780    2428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:53.008784    2428 out.go:358] Setting ErrFile to fd 2...
I0831 15:22:53.008786    2428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:53.008925    2428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:22:53.009331    2428 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:53.009391    2428 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:53.010192    2428 ssh_runner.go:195] Run: systemctl --version
I0831 15:22:53.010206    2428 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
I0831 15:22:53.047295    2428 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
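
image ls supports several --format values; short prints one reference per line, while table (see the next test) adds image IDs and sizes. A minimal sketch:

  minikube -p functional-221000 image ls --format short
  minikube -p functional-221000 image ls --format table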

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-221000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| docker.io/kicbase/echo-server               | functional-221000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-221000 | f4edc572175ae | 1.41MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-221000 | 09f81dcb522de | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-221000 image ls --format table --alsologtostderr:
I0831 15:22:54.978736    2441 out.go:345] Setting OutFile to fd 1 ...
I0831 15:22:54.978906    2441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:54.978910    2441 out.go:358] Setting ErrFile to fd 2...
I0831 15:22:54.978912    2441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:54.979037    2441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:22:54.979480    2441 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:54.979550    2441 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:54.980438    2441 ssh_runner.go:195] Run: systemctl --version
I0831 15:22:54.980446    2441 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
I0831 15:22:55.010785    2441 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/08/31 15:22:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-221000 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"09f81dcb522de86cdbda8896af5ae1813bfbb515e25171474b9eaf384e04d6e5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-221000"],"size":"30"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"f4edc572175ae135ff48b4a56cab34d0b5258d90a3bd95f9e65b2ae868581223","repoDigests":[],"repoTags":["localhost/my-image:functional-221000"],"size":"1410000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-221000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-221000 image ls --format json --alsologtostderr:
I0831 15:22:54.899764    2439 out.go:345] Setting OutFile to fd 1 ...
I0831 15:22:54.899914    2439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:54.899918    2439 out.go:358] Setting ErrFile to fd 2...
I0831 15:22:54.899920    2439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:54.900040    2439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:22:54.900433    2439 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:54.900495    2439 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:54.901384    2439 ssh_runner.go:195] Run: systemctl --version
I0831 15:22:54.901394    2439 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
I0831 15:22:54.933808    2439 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
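For reference, the stdout above is a single JSON array whose objects carry id, repoDigests, repoTags, and size keys. A minimal Go sketch (not part of the test suite; the profile name is copied from this run and `minikube` is assumed to be on PATH) that decodes that output:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    // listedImage mirrors the keys visible in the stdout above.
    type listedImage struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, as a decimal string
    }

    func main() {
        // Assumes the functional-221000 profile from this run is still up.
        out, err := exec.Command("minikube", "-p", "functional-221000",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var images []listedImage
        if err := json.Unmarshal(out, &images); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, img := range images {
            fmt.Printf("%.12s  %v\n", img.ID, img.RepoTags)
        }
    }

Note that size is reported as a decimal string of bytes (e.g. "91500000"), which is why the struct field is a string rather than an integer.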

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-221000 image ls --format yaml --alsologtostderr:
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-221000
size: "4780000"
- id: 09f81dcb522de86cdbda8896af5ae1813bfbb515e25171474b9eaf384e04d6e5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-221000
size: "30"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f4edc572175ae135ff48b4a56cab34d0b5258d90a3bd95f9e65b2ae868581223
repoDigests: []
repoTags:
- localhost/my-image:functional-221000
size: "1410000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-221000 image ls --format yaml --alsologtostderr:
I0831 15:22:54.804041    2437 out.go:345] Setting OutFile to fd 1 ...
I0831 15:22:54.804424    2437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:54.804429    2437 out.go:358] Setting ErrFile to fd 2...
I0831 15:22:54.804432    2437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:54.804629    2437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:22:54.805293    2437 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:54.805380    2437 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:54.806278    2437 ssh_runner.go:195] Run: systemctl --version
I0831 15:22:54.806287    2437 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
I0831 15:22:54.835937    2437 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 ssh pgrep buildkitd: exit status 1 (66.658375ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image build -t localhost/my-image:functional-221000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-221000 image build -t localhost/my-image:functional-221000 testdata/build --alsologtostderr: (1.565708792s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-221000 image build -t localhost/my-image:functional-221000 testdata/build --alsologtostderr:
I0831 15:22:53.169085    2432 out.go:345] Setting OutFile to fd 1 ...
I0831 15:22:53.169360    2432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:53.169365    2432 out.go:358] Setting ErrFile to fd 2...
I0831 15:22:53.169367    2432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 15:22:53.169540    2432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18943-972/.minikube/bin
I0831 15:22:53.169977    2432 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:53.170725    2432 config.go:182] Loaded profile config "functional-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 15:22:53.171608    2432 ssh_runner.go:195] Run: systemctl --version
I0831 15:22:53.171621    2432 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18943-972/.minikube/machines/functional-221000/id_rsa Username:docker}
I0831 15:22:53.200294    2432 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.567849889.tar
I0831 15:22:53.200356    2432 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0831 15:22:53.204111    2432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.567849889.tar
I0831 15:22:53.205687    2432 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.567849889.tar: stat -c "%s %y" /var/lib/minikube/build/build.567849889.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.567849889.tar': No such file or directory
I0831 15:22:53.205706    2432 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.567849889.tar --> /var/lib/minikube/build/build.567849889.tar (3072 bytes)
I0831 15:22:53.214619    2432 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.567849889
I0831 15:22:53.218710    2432 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.567849889 -xf /var/lib/minikube/build/build.567849889.tar
I0831 15:22:53.222419    2432 docker.go:360] Building image: /var/lib/minikube/build/build.567849889
I0831 15:22:53.222475    2432 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-221000 /var/lib/minikube/build/build.567849889
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:f4edc572175ae135ff48b4a56cab34d0b5258d90a3bd95f9e65b2ae868581223 done
#8 naming to localhost/my-image:functional-221000 done
#8 DONE 0.0s
I0831 15:22:54.686147    2432 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-221000 /var/lib/minikube/build/build.567849889: (1.463693042s)
I0831 15:22:54.686235    2432 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.567849889
I0831 15:22:54.691294    2432 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.567849889.tar
I0831 15:22:54.694988    2432 build_images.go:217] Built localhost/my-image:functional-221000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.567849889.tar
I0831 15:22:54.695005    2432 build_images.go:133] succeeded building to: functional-221000
I0831 15:22:54.695009    2432 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.71s)
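The stderr above traces how `minikube image build` works here: build_images.go tars the local context, copies the tarball into the guest, unpacks it under /var/lib/minikube/build, and runs `docker build` there. A standalone Go sketch of the same tar-then-build idea against a local Docker daemon (an illustration, not minikube's implementation; tag and context path are borrowed from this run, and symlink/ownership handling is omitted):

    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"
        "os/exec"
        "path/filepath"
    )

    // tarDir streams the regular files under dir into w as a tar archive.
    // A sketch: symlinks, empty directories, and ownership are ignored.
    func tarDir(dir string, w io.Writer) error {
        tw := tar.NewWriter(w)
        defer tw.Close()
        return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
            if err != nil || info.IsDir() {
                return err
            }
            rel, err := filepath.Rel(dir, path)
            if err != nil {
                return err
            }
            hdr, err := tar.FileInfoHeader(info, "")
            if err != nil {
                return err
            }
            hdr.Name = filepath.ToSlash(rel)
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            f, err := os.Open(path)
            if err != nil {
                return err
            }
            defer f.Close()
            _, err = io.Copy(tw, f)
            return err
        })
    }

    func main() {
        pr, pw := io.Pipe()
        go func() { pw.CloseWithError(tarDir("testdata/build", pw)) }()
        // "-" makes docker build read the context tar from stdin,
        // mirroring the tar-and-ship flow in the log above.
        cmd := exec.Command("docker", "build", "-t", "localhost/my-image:functional-221000", "-")
        cmd.Stdin = pr
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }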

TestFunctional/parallel/ImageCommands/Setup (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.679034209s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-221000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.69s)

TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-221000 docker-env) && out/minikube-darwin-arm64 status -p functional-221000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-221000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 update-context --alsologtostderr -v=2
E0831 15:22:52.971610    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-221000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-221000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-64b4f8f9ff-rmffc" [154dd718-9dfb-4b9c-b955-928c716c875d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-64b4f8f9ff-rmffc" [154dd718-9dfb-4b9c-b955-928c716c875d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.011163959s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
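The step above creates the deployment, exposes it as a NodePort service, and polls until pods labelled app=hello-node are healthy. A minimal client-go sketch of such a polling loop (an illustration only, not how helpers_test.go implements it; the kubeconfig path and two-second interval are assumptions):

    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether there is at least one matching pod
    // and every matching pod has reached the Running phase.
    func allRunning(pods []corev1.Pod) bool {
        if len(pods) == 0 {
            return false
        }
        for _, p := range pods {
            if p.Status.Phase != corev1.PodRunning {
                return false
            }
        }
        return true
    }

    func main() {
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        deadline := time.Now().Add(10 * time.Minute) // same budget the test uses
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods("default").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app=hello-node"})
            if err == nil && allRunning(pods.Items) {
                fmt.Println("app=hello-node is healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for app=hello-node")
        os.Exit(1)
    }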

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image load --daemon kicbase/echo-server:functional-221000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image load --daemon kicbase/echo-server:functional-221000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-221000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image load --daemon kicbase/echo-server:functional-221000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image save kicbase/echo-server:functional-221000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image rm kicbase/echo-server:functional-221000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-221000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 image save --daemon kicbase/echo-server:functional-221000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-221000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-221000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-221000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-221000 tunnel --alsologtostderr] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-221000 tunnel --alsologtostderr] ...
helpers_test.go:509: unable to kill pid 2245: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.35s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-221000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-221000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:345: "nginx-svc" [1c4eda75-f006-41fe-9efc-bd4dd5e9ea74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx-svc" [1c4eda75-f006-41fe-9efc-bd4dd5e9ea74] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00871925s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 service list -o json
functional_test.go:1494: Took "86.741875ms" to run "out/minikube-darwin-arm64 -p functional-221000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32470
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32470
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
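HTTPS, Format, and URL all resolve the same NodePort endpoint (192.168.105.4:32470 in this run). Once the URL is known, exercising it is a plain HTTP request; a short sketch with the endpoint hard-coded from this run (echoserver replies by echoing the request back):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Endpoint printed by `minikube service hello-node --url` in this run.
        resp, err := http.Get("http://192.168.105.4:32470")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s\n%s", resp.Status, body)
    }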

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-221000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.104.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
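The dig invocation above asks the cluster DNS service at 10.96.0.10 directly, which works from the host here because `minikube tunnel` is routing cluster IPs. The equivalent lookup in Go with a resolver pinned to that server (a sketch; the server address and service name are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            // Ignore the host's configured nameserver and ask the
            // cluster DNS service (10.96.0.10, per the dig line above).
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(addrs)
    }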

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-221000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "90.039042ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.840583ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "86.364792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.459292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port687838321/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725142965018430000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port687838321/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725142965018430000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port687838321/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725142965018430000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port687838321/001/test-1725142965018430000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T /mount-9p | grep 9p": (1.019619625s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 31 22:22 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:22 test-1725142965018430000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh cat /mount-9p/test-1725142965018430000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-221000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:345: "busybox-mount" [05265cce-6cbb-4b51-b5dd-82c7c3475f7d] Pending
helpers_test.go:345: "busybox-mount" [05265cce-6cbb-4b51-b5dd-82c7c3475f7d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:345: "busybox-mount" [05265cce-6cbb-4b51-b5dd-82c7c3475f7d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "busybox-mount" [05265cce-6cbb-4b51-b5dd-82c7c3475f7d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007936291s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-221000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port687838321/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.63s)

TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1321680966/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.440084ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1321680966/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 ssh "sudo umount -f /mount-9p": exit status 1 (62.251334ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-221000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1321680966/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2682889019/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2682889019/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2682889019/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T" /mount1: exit status 1 (77.896208ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-221000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-221000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2682889019/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2682889019/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-221000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2682889019/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.97s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-221000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-221000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-221000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (186.61s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-166000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0831 15:23:13.454686    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:23:54.413636    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:25:16.334059    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/addons-492000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-166000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m6.424238583s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (186.61s)

TestMultiControlPlane/serial/DeployApp (4.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-166000 -- rollout status deployment/busybox: (2.883676125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-5fl7x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-hz4fm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-nsh4d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-5fl7x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-hz4fm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-nsh4d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-5fl7x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-hz4fm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-nsh4d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.34s)
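The deploy step above reduces to three shell calls: apply the busybox manifest, wait for the rollout, then resolve names from inside each pod. A minimal Go sketch of that flow, shelling out the same way the harness's (dbg) Run lines do (profile and pod names copied from this log; an illustration, not the actual ha_test.go helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary under test, as the (dbg) Run lines do.
	func run(args ...string) {
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}

	func main() {
		run("kubectl", "-p", "ha-166000", "--", "apply", "-f", "./testdata/ha/ha-pod-dns-test.yaml")
		run("kubectl", "-p", "ha-166000", "--", "rollout", "status", "deployment/busybox")
		for _, pod := range []string{"busybox-7dff88458-5fl7x", "busybox-7dff88458-hz4fm", "busybox-7dff88458-nsh4d"} {
			// DNS must resolve in-cluster names from every pod on every node.
			run("kubectl", "-p", "ha-166000", "--", "exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
		}
	}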

TestMultiControlPlane/serial/PingHostFromPods (0.7s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-5fl7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-5fl7x -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-hz4fm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-hz4fm -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-nsh4d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-166000 -- exec busybox-7dff88458-nsh4d -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.70s)
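The pipeline inside the exec lines above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, grabs line 5 of nslookup's output and takes its third space-separated field, which for busybox's nslookup is the resolved host IP that the follow-up ping targets. The same extraction in Go (the sample output shape is an assumption for illustration):

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics awk 'NR==5' | cut -d' ' -f3: line 5, field 3.
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // cut splits on every single space
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Assumed busybox-style nslookup output, for illustration only.
		out := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.105.1 host.minikube.internal\n"
		fmt.Println(hostIP(out)) // 192.168.105.1
	}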

TestMultiControlPlane/serial/AddWorkerNode (53.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-166000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-166000 -v=7 --alsologtostderr: (53.529229292s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.75s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-166000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.21s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 status --output json -v=7 --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp testdata/cp-test.txt ha-166000:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1880586527/001/cp-test_ha-166000.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000:/home/docker/cp-test.txt ha-166000-m02:/home/docker/cp-test_ha-166000_ha-166000-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test_ha-166000_ha-166000-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000:/home/docker/cp-test.txt ha-166000-m03:/home/docker/cp-test_ha-166000_ha-166000-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test_ha-166000_ha-166000-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000:/home/docker/cp-test.txt ha-166000-m04:/home/docker/cp-test_ha-166000_ha-166000-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test_ha-166000_ha-166000-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp testdata/cp-test.txt ha-166000-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1880586527/001/cp-test_ha-166000-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m02:/home/docker/cp-test.txt ha-166000:/home/docker/cp-test_ha-166000-m02_ha-166000.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test_ha-166000-m02_ha-166000.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m02:/home/docker/cp-test.txt ha-166000-m03:/home/docker/cp-test_ha-166000-m02_ha-166000-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test_ha-166000-m02_ha-166000-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m02:/home/docker/cp-test.txt ha-166000-m04:/home/docker/cp-test_ha-166000-m02_ha-166000-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test_ha-166000-m02_ha-166000-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp testdata/cp-test.txt ha-166000-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1880586527/001/cp-test_ha-166000-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test.txt"
E0831 15:27:07.253698    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:07.261303    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:07.273655    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m03:/home/docker/cp-test.txt ha-166000:/home/docker/cp-test_ha-166000-m03_ha-166000.txt
E0831 15:27:07.295994    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
E0831 15:27:07.338151    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test.txt"
E0831 15:27:07.421687    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test_ha-166000-m03_ha-166000.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m03:/home/docker/cp-test.txt ha-166000-m02:/home/docker/cp-test_ha-166000-m03_ha-166000-m02.txt
E0831 15:27:07.584780    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test_ha-166000-m03_ha-166000-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m03:/home/docker/cp-test.txt ha-166000-m04:/home/docker/cp-test_ha-166000-m03_ha-166000-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test.txt"
E0831 15:27:07.908357    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test_ha-166000-m03_ha-166000-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp testdata/cp-test.txt ha-166000-m04:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1880586527/001/cp-test_ha-166000-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m04:/home/docker/cp-test.txt ha-166000:/home/docker/cp-test_ha-166000-m04_ha-166000.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000 "sudo cat /home/docker/cp-test_ha-166000-m04_ha-166000.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m04:/home/docker/cp-test.txt ha-166000-m02:/home/docker/cp-test_ha-166000-m04_ha-166000-m02.txt
E0831 15:27:08.549845    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m02 "sudo cat /home/docker/cp-test_ha-166000-m04_ha-166000-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 cp ha-166000-m04:/home/docker/cp-test.txt ha-166000-m03:/home/docker/cp-test_ha-166000-m04_ha-166000-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-arm64 -p ha-166000 ssh -n ha-166000-m03 "sudo cat /home/docker/cp-test_ha-166000-m04_ha-166000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.21s)
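The CopyFile block above is an all-pairs matrix over the profile's four nodes: push testdata/cp-test.txt to each node, copy it node-to-host and node-to-node, and verify every hop with ssh plus sudo cat. Condensed into the loop it effectively runs (names taken from this log; a sketch, not the helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) {
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}

	func main() {
		nodes := []string{"ha-166000", "ha-166000-m02", "ha-166000-m03", "ha-166000-m04"}
		for _, src := range nodes {
			// Seed the source node, then fan the file out to every other node.
			mk("-p", "ha-166000", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
			mk("-p", "ha-166000", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
				mk("-p", "ha-166000", "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
				mk("-p", "ha-166000", "ssh", "-n", dst, "sudo cat "+dstPath)
			}
		}
	}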

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.894754708s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.90s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-629000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-629000 --output=json --user=testUser: (3.382248875s)
--- PASS: TestJSONOutput/stop/Command (3.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-690000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-690000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.734792ms)

-- stdout --
	{"specversion":"1.0","id":"7860aa27-b9a4-42bd-8b93-1fc9f40fb5dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-690000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a1b95a9-1b48-4cb6-881f-e61f4e5fa80a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"fb64dcbe-6fb7-4002-afc5-86410f5f27b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig"}}
	{"specversion":"1.0","id":"0c5ed13b-0ee5-4e49-bad1-048d901898f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7d360841-2f95-4599-b393-01cc6c89ebe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6795f142-9ddd-46e2-88b5-0dd30608824e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube"}}
	{"specversion":"1.0","id":"55c81070-20b7-4310-b6e9-36a222a3cc9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"69b62a70-e0f6-4964-b61a-69e11f2c7346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-690000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-690000
--- PASS: TestErrorJSONOutput (0.20s)
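Each line in the -- stdout -- block above is a CloudEvents 1.0 envelope (specversion, id, source, type, datacontenttype, data). A minimal Go struct that decodes the error event from this log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// minikubeEvent covers exactly the fields visible in the log output.
	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"69b62a70-e0f6-4964-b61a-69e11f2c7346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"]) // io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
	}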

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-246000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.8855ms)

-- stdout --
	* [NoKubernetes-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18943-972/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18943-972/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
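The assertion here is purely about the exit code: combining --no-kubernetes with --kubernetes-version must fail with exit status 14 (MK_USAGE). A sketch of reading that code from Go via exec.ExitError (not the test's own helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-246000",
			"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode()) // expect 14 for MK_USAGE
		}
	}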

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-246000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-246000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.464208ms)

-- stdout --
	* The control-plane node NoKubernetes-246000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-246000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.749005166s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.696627583s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.45s)

TestNoKubernetes/serial/Stop (2.93s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-246000
E0831 16:02:07.182172    1432 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/18943-972/.minikube/profiles/functional-221000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-246000: (2.929518208s)
--- PASS: TestNoKubernetes/serial/Stop (2.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-246000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-246000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.479833ms)

-- stdout --
	* The control-plane node NoKubernetes-246000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-246000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (2.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-797000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-797000 --alsologtostderr -v=3: (2.071579167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-797000 -n old-k8s-version-797000: exit status 7 (46.997792ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-797000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
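The status probe above, status --format={{.Host}}, renders minikube's status through a Go text/template, and exit status 7 marks a stopped host, hence the "may be ok" note before the addon is enabled. A toy equivalent of the template rendering (struct shape assumed for illustration):

	package main

	import (
		"os"
		"text/template"
	)

	// status stands in for minikube's real status struct; only Host is shown here.
	type status struct{ Host string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, status{Host: "Stopped"}) // prints: Stopped
	}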

TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-231000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

TestStartStop/group/no-preload/serial/Stop (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-233000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-233000 --alsologtostderr -v=3: (3.301220958s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-233000 -n no-preload-233000: exit status 7 (53.500208ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-233000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-169000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-169000 --alsologtostderr -v=3: (3.055209167s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-169000 -n embed-certs-169000: exit status 7 (56.75275ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-169000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-646000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-646000 --alsologtostderr -v=3: (1.758484083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.76s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (55.227709ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-646000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-648000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-648000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-648000 --alsologtostderr -v=3: (3.302371875s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-648000 -n newest-cni-648000: exit status 7 (57.300042ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-648000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/275)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)
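"skip Helm test on arm64" is the usual GOARCH gate in a Go test; a sketch of the pattern (assumed to mirror the check at addons_test.go:446, not copied from it):

	package addons

	import (
		"runtime"
		"testing"
	)

	// TestHelmTillerSketch shows the arch gate; the real assertions would follow it.
	func TestHelmTillerSketch(t *testing.T) {
		if runtime.GOARCH == "arm64" {
			t.Skip("skip Helm test on arm64")
		}
		// ... helm-tiller checks would run here on supported architectures ...
	}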

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestContainerIPsMultiNetwork (0s)

=== RUN   TestContainerIPsMultiNetwork
multinetwork_test.go:43: running with runtime:docker goos:darwin goarch:arm64
multinetwork_test.go:45: skipping: only docker driver supported
--- SKIP: TestContainerIPsMultiNetwork (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-571000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-571000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-571000

>>> host: crictl pods:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: crictl containers:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> k8s: describe netcat deployment:
error: context "cilium-571000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-571000" does not exist

>>> k8s: netcat logs:
error: context "cilium-571000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-571000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-571000" does not exist

>>> k8s: coredns logs:
error: context "cilium-571000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-571000" does not exist

>>> k8s: api server logs:
error: context "cilium-571000" does not exist

>>> host: /etc/cni:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: ip a s:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: ip r s:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: iptables-save:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: iptables table nat:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-571000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-571000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-571000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-571000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-571000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-571000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-571000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-571000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-571000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-571000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-571000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: kubelet daemon config:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> k8s: kubelet logs:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-571000

>>> host: docker daemon status:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: docker daemon config:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: docker system info:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: cri-docker daemon status:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: cri-docker daemon config:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: cri-dockerd version:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: containerd daemon status:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: containerd daemon config:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: containerd config dump:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: crio daemon status:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: crio daemon config:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: /etc/crio:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

>>> host: crio config:
* Profile "cilium-571000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571000"

----------------------- debugLogs end: cilium-571000 [took: 2.364716291s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-571000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-571000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-956000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-956000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
